- Paperback: 402 pages
- Publisher: O'Reilly & Associates Inc (2003/11)
- Language: English
- ISBN-10: 0596005776
- ISBN-13: 978-0596005771
- Release date: 2003/11
- Package dimensions: 15.2 x 2.5 x 22.9 cm
- Rating: 1 customer review
- Amazon Best Sellers Rank: #335,882 in Foreign Books
Spidering Hacks (English) Paperback – 2003/11
The Internet, with its profusion of information, has made us hungry for ever more, ever better data. Out of necessity, many of us have become pretty adept with search engine queries, but there are times when even the most powerful search engines aren't enough. If you've ever wanted your data in a different form than it's presented, or wanted to collect data from several sites and see it side-by-side without the constraints of a browser, then "Spidering Hacks" is for you.
"Spidering Hacks" takes you to the next level in Internet data retrieval--beyond search engines--by showing you how to create spiders and bots to retrieve information from your favorite sites and data sources. You'll no longer feel constrained by the way host sites think you want to see their data presented--you'll learn how to scrape and repurpose raw data so you can view it in a way that's meaningful to you.
Written for developers, researchers, technical assistants, librarians, and power users, "Spidering Hacks" provides expert tips on spidering and scraping methodologies. You'll begin with a crash course in spidering concepts, tools (Perl, LWP, out-of-the-box utilities), and ethics (how to know when you've gone too far: what's acceptable and unacceptable). Next, you'll collect media files and data from databases. Then you'll learn how to interpret and understand the data, repurpose it for use in other applications, and even build authorized interfaces to integrate the data into your own content.

By the time you finish "Spidering Hacks," you'll be able to:

- Aggregate and associate data from disparate locations, then store and manipulate the data as you like
- Gain a competitive edge in business by knowing when competitors' products are on sale, and comparing sales ranks and product placement on e-commerce sites
- Integrate third-party data into your own applications or web sites
- Make your own site easier to scrape and more usable to others
- Keep up to date with your favorite comic strips, news stories, stock tips, and more without visiting the site every day

Like the other books in O'Reilly's popular Hacks series, "Spidering Hacks" brings you 100 industrial-strength tips and tools from the experts to help you master this technology. If you're interested in data retrieval of any type, this book provides a wealth of data for finding a wealth of data.
Kevin Hemenway, coauthor of Mac OS X Hacks, is better known as Morbus Iff, the creator of disobey.com, which bills itself as "content for the discontented." Publisher and developer of more home cooking than you could ever imagine, he'd love to give you a Fry Pan of Intellect upside the head. Politely, of course. And with love.
Most helpful customer reviews on Amazon.com (beta)
Tara Calishain and Kevin Hemenway have taken a fairly complex topic, spidering and scraping web sites, and reduced it to manageable chunks in their hundred hacks. The writing has the same light, readable feel you quickly grow to expect from O'Reilly. Certainly I have never found myself faulting their editing.
There are some caveats. It seems that O'Reilly and Dornfest (the editor of this book and the series) have fallen in love with having a hundred hacks and little in the way of an introduction. This might have been a better book with 90 `hacks' and a much larger introduction, as the first chapter's hacks are all too light and are really introductory material, such as how an HTML page is built and how to properly register your spider. Given that only someone with a fair amount of web knowledge is going to consider spidering a website in the first place, this early material is far too slight. From Hack #9 on, it quickly gets down to useful, informative chunks and no longer feels `lightweight'.
This may be a reflection of trying to extend the `Hacks' series into places where it has to be forced. While the format worked well for Google and Amazon, I felt the entire topic of eBay was too light for this series, and perhaps spidering is too heavy or complex. If this book had been written in a more traditional format, some of my complaints would disappear.
All the examples are in Perl, and the serious part of the book starts with examples using LWP::Simple to grab a page before going on to LWP::UserAgent and much more complex requests using authentication, custom headers, and posted form data. It also covers using curl and wget.
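The book's code is Perl (LWP::Simple, LWP::UserAgent); purely as an illustration of the same progression, here is a rough stdlib-Python sketch, going from a one-call fetch to a request that carries a custom identifying header. The URL and User-Agent string are placeholders of my own, not from the book.

```python
import urllib.request

# Roughly the spirit of LWP::Simple's get(): fetch a page in one call.
def simple_get(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# Roughly the spirit of LWP::UserAgent: build a request that identifies
# your spider to webmasters via a custom User-Agent header.
def build_spider_request(url):
    return urllib.request.Request(
        url,
        headers={"User-Agent": "MyBookSpider/0.1 (contact: me@example.com)"},
    )

req = build_spider_request("http://example.com/")
# urllib stores header names capitalized, e.g. "User-agent".
print(req.get_header("User-agent"))
```

The same Request object also accepts a `data=` argument for posting form data, which is where the book's "much more complex requests" come in.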
Then it gets down to the nitty-gritty of scraping using HTML::TreeBuilder and HTML::TokeParser. This is further expanded through the next few hacks until, from Hack #39 through #89, there is a good series of examples (perhaps a few too many). Finally, there are two chapters on maintaining your collection and `Giving Back To The World', which tells how to make your site easy to scrape and how to use RSS.
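HTML::TreeBuilder and HTML::TokeParser are Perl modules; as a purely illustrative stdlib-Python analogue of that kind of token-based scraping, this walks the start tags of an HTML snippet and collects link targets. The snippet and class name are invented for the example.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

snippet = '<p><a href="/errata">Errata</a> <a href="/toc">Contents</a></p>'
collector = LinkCollector()
collector.feed(snippet)
print(collector.links)  # ['/errata', '/toc']
```

A tree-building parser (closer to HTML::TreeBuilder) would instead give you a navigable document structure, but the event-driven style above is closest to HTML::TokeParser's token stream.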
O'Reilly has a page for the book with ten sample hacks, the index, the table of contents, and errata, and you can also visit hacks.oreilly.com for the same ten hacks, with the possibility of more being added.
As a whole, this volume seems a little thin. If you've been doing the maths, you've realised that only about thirty of the hundred hacks actually give any details on building and running a serious web spider. Sure, a number of the examples provide good information on how to perform various tasks, and some of the last eleven hacks are good to know, but all in all the book feels like it lacks solid information throughout. A bit more on various crawling and page-parsing techniques would have been good.
After that criticism I'm now surprising myself: I'm going to recommend this book. This isn't a large field, and when you consider that most other books on writing spiders and crawlers are less than practical and more than expensive, "Spidering Hacks" has many good points. It's written for the practical Perl programmer, it examines several methods, and it gives lots of examples; while not cheap, it's certainly not overpriced. Given that I found it both useful and inspiring, the complaints above may amount to nitpicking. I should also say that I found this volume immensely useful in writing my own spider and scraper (it gets a list of new books from the web sites of several publishers). I have to be honest and admit that there are three publishers, O'Reilly, Addison-Wesley, and Prentice Hall, from whom I expect a decent standard, and I criticise a little harder when they stray from that norm. If this book had come from SAMS or Wrox, I might not have looked quite so hard for flaws, and I might have been a little more generous in my treatment of the ones I found.
That said, I recommend this book to you if you want a practical introduction to building a web spider in Perl.
Spidering is the way search engines gather their data, but you do not have to be AltaVista or Google to use spiders, nor do you have to be scanning a large fraction of the Web. The authors demystify spiders. If you can follow their examples, you get concrete instances of usage that might help your particular application.
Thoughtfully, the examples are mostly written in Perl, with a few in Java. These languages should be familiar to many. Even if you don't know them, the logic of the code can still be useful; that is, you can treat the code as pseudocode.
While spiders are probably best known for their use by search engines, they are really only the starting point for the latter. The much harder problems begin once you have the data amassed by a spider: now you have to efficiently find correlations between the various web pages. Be aware that the book does not discuss these with any significant depth, which is not surprising, because they are outside its scope. The examples do show how to use the data found by spiders, but most of them are for web pages that sit in a given domain, so the pages are closely affiliated in content and structure.
This book demonstrates everything I like in a technical book. It not only describes how things are done. It also gives practical examples of how the technology can be useful in the real world, and presents them enthusiastically. It makes you want to go out and implement all of the ideas and to keep on going with some of your own.
My nitpicks with the book are minor. The 'Hacks' format seems imposed; for example, Hack #8 is about installing CPAN modules. I don't think that section should be left out, but I don't think it's a hack either. But hey, I don't care that much about the structure as long as it isn't an imposing flaw and the content within it is great, as it is with this book.
Have to say, O'Reilly is on a roll with the Hacks series. They have all been fine books.
Enter Kevin Hemenway and Tara Calishain's latest O'Reilly book: Spidering Hacks. Continuing in the O'Reilly "Hacks" tradition, this comprehensive guidebook provides a hundred clear, useful tools for designing and implementing the next generation -- or maybe just your own customized -- spider (or bot, if you prefer).
So why build your own spider? Well, if you have a large website, your spider could check link integrity, validate HTML, and check meta tags. If you are researching a topic and Google is not returning what you want, creating your own spider might be just what you need. This handy book (with examples in Perl) will show you how to:
* Create a site-friendly bot that won't get you banned by webmasters (Hack #16 -- Respecting Your Scrapee's Bandwidth, and Hack #17 -- Respecting robots.txt)
* Interested in graphics, audio, and video? Hacks #33 through #42 step you through collecting media files. Specific examples include scraping films from [...] (Hack #24), gathering movies from the Library of Congress (Hack #35), and archiving images from Webshots. You'll have your own personalized library in no time.
* Weblog-Free Google Results -- weblogs (aka blogs) are amazingly popular these days, and with Google's PageRank algorithm, that means they get heavy emphasis in your search results. Hack #50 trims down the search results by eliminating those annoying blogs.
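The robots.txt etiquette behind Hack #17 can be sketched with Python's stdlib robot-rules parser (again, an illustrative analogue rather than the book's Perl code; the rules and URLs here are made up):

```python
import urllib.robotparser

# Parse a hypothetical robots.txt that fences off /private/ for all bots.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A polite spider checks can_fetch() before requesting each URL.
print(rp.can_fetch("MySpider", "http://example.com/public/page.html"))   # True
print(rp.can_fetch("MySpider", "http://example.com/private/page.html"))  # False
```

In a real spider you would call `rp.set_url(".../robots.txt")` and `rp.read()` once per site, then gate every fetch on `can_fetch()`.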
In addition, you'll find multiple hacks covering Amazon.com and RSS feeds. The book includes much information on spider automation (e.g., cron-jobbing your spiders). You'll find content filtering and even a hack using PHP code (Hack #84).
This book is extraordinarily helpful and is a great resource for any Perl hacker. I highly recommend it to any computer hobbyist interested in data mining, spidering, and scraping. Well done, O'Reilly!
- Foreign Books > Computers & Technology > Business & Management > Privacy
- Foreign Books > Computers & Technology > Databases > Data Mining
- Foreign Books > Computers & Technology > Networking > Network Administration
- Foreign Books > Computers & Technology > Networking > Network Security
- Foreign Books > Computers & Technology > Programming > Languages & Tools
- Foreign Books > Education & Reference