Content scraping is a big issue on today's Internet, and getting it right is a challenge for search engines. It basically comes down to the question of attribution: who is the original author of a piece of content, and on which website was it published first? Search engines get this right most of the time. Sometimes, though, they fail, and when that happens it hurts the original content creator badly.
Why is it bad? Because search engines rank content in their search results. If they believe you are the creator, you are usually ranked higher than a site that has copied the content, if such a site is ranked at all.
The reality is this: sites that scrape content are easy to set up (mostly by republishing RSS feeds), require barely any maintenance, and earn good money when run in bulk. These sites publish copied content shortly after it appears on the original website. As long as people can make money with this method, they will keep using it.
Fat Pings are one way to resolve the situation. The idea is simple: when you publish an article, you ping a trusted source to confirm that your site is the original location of that article. It does not really matter whether you do it ten seconds before the scraper does or one hour before; all that matters is that you do it first. This obviously has consequences for original content creators who do not use Fat Pings, since scraper sites may well use Fat Pings themselves for an extra advantage.
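Under the hood, the ping in the PubSubHubbub protocol is just an HTTP POST to a hub announcing that a feed has new content. Here is a minimal sketch in Python; the hub address is Google's public reference hub, the feed URL is a placeholder, and the function names are my own, not part of any library:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_publish_ping(hub_url, topic_url):
    """Build the HTTP POST that notifies a PubSubHubbub hub that
    the feed at topic_url has new content. Returns the prepared
    request without sending it."""
    body = urlencode({"hub.mode": "publish", "hub.url": topic_url})
    return Request(
        hub_url,
        data=body.encode("ascii"),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

def send_publish_ping(hub_url, topic_url):
    # Hypothetical helper that actually performs the POST;
    # per the protocol, the hub answers 204 No Content on success.
    with urlopen(build_publish_ping(hub_url, topic_url)) as resp:
        return resp.status

# Example (the feed URL is a placeholder, not a real site):
req = build_publish_ping(
    "https://pubsubhubbub.appspot.com/",
    "https://example.com/feed/",
)
```

Blogging platforms and plugins fire exactly this kind of request for you on publish; the point is only that whoever pings first establishes the claim.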
How do you cope with that situation? You configure all of your sites to support Fat Pings. Here is how that is done.
If your blog is hosted on WordPress.com or Blogger, you do not need to do anything: Fat Pings are sent out automatically. If you are running a self-hosted WordPress blog, you can install the free PubSubHubbub plugin to inform search engines whenever your blog has been updated.
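Once the plugin is active, one way to verify the setup is to check that your feed actually advertises a hub, which a PubSubHubbub-enabled Atom feed does with a `<link rel="hub">` element. A quick sketch; the sample feed and both URLs are placeholders, not output from any specific plugin:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def find_hub_links(feed_xml):
    """Return the hub URLs declared in an Atom feed document."""
    root = ET.fromstring(feed_xml)
    return [
        link.attrib["href"]
        for link in root.iter(ATOM_NS + "link")
        if link.attrib.get("rel") == "hub"
    ]

# Sample feed a PubSubHubbub-enabled blog might serve (placeholder URLs):
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
  <link rel="self" href="https://example.com/feed/"/>
</feed>"""
```

In practice you would fetch your live feed and run it through a check like this; if no hub link shows up, the plugin is not doing its job.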