Today I'd like to talk about how to use Ruby and Nokogiri to simulate a crawler and export an RSS feed. Many people may not be familiar with this, so I've put together the following walkthrough; hopefully you'll get something useful out of it.
# encoding: utf-8
require 'thread'
require 'nokogiri'
require 'open-uri'
require 'rss/maker'

# Thread-safe queue collecting the scraped entries
$result = Queue.new

# Fetch a project page, follow its README frame and push the summary onto $result.
# Note: open-uri's Kernel#open patch was removed in Ruby 3.0, so URI.open is used here.
def extract_readme_header(no, name, url)
  frame = Nokogiri::HTML(URI.open(url))
  return unless frame
  # rdoc.info project pages are framesets; the second frame points at the README
  readme = $url + frame.css('frame')[1]['src']
  return unless readme
  URI.open(readme) do |f|
    doc  = Nokogiri::HTML(f.read)
    text = doc.css("div#content div#filecontents p")[0..4].map { |c| c.content }.join(" ").strip
    return if text.length == 0
    if text !~ /(rails)|(activ_)/i
      puts "========= #{no} #{name} : #{text[0..50]}"
      date = f.last_modified
      $result << [no, name, readme, date, text]
    end
  end
rescue
  puts $!.to_s
end

# Build an RSS 2.0 document from the collected entries
def make_rss(items)
  RSS::Maker.make("2.0") do |m|
    m.channel.title       = "GitHub recently updated projects"
    m.channel.link        = "http://localhost"
    m.channel.description = "GitHub recently updated projects"
    m.items.do_sort = true
    items.each do |no, name, url, date, descr|
      i = m.items.new_item
      i.title       = name
      i.link        = url
      i.description = descr
      i.date        = date
    end
  end
end

############################## M A I N ########################

############# Scan the list of recent projects
lth = []
$url = "http://rdoc.info"
puts "get url #{$url}..."
doc = Nokogiri::HTML(URI.open($url))
doc.css('ul.libraries')[1].css('li').each_with_index do |li, i|
  aname = li.css('a').first
  name  = aname.content
  purl  = $url + aname['href']
  lth << Thread.new(i, name, purl) { |j, n, u| extract_readme_header(j, n, u) }
end

################ Wait until every README has been read
lth.each { |th| th.join() }

################ Dequeue the results and restore the original list order
result = []
result << $result.shift while $result.size > 0
result.sort! { |a, b| a[0] <=> b[0] }

################ Format the results as RSS and write them to a file
File.open("RubyFeeds.rss", "w") do |file|
  file.write make_rss(result)
end
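To sanity-check the output, you can read the generated file back in with Ruby's standard rss library. The snippet below is only a minimal sketch: it assumes the RubyFeeds.rss file written by the script above sits in the current directory and simply prints each item's date, title and link.

# A minimal sketch, assuming RubyFeeds.rss was produced by the script above.
# It parses the feed with the standard rss library and lists the items.
require 'rss'

feed = File.open("RubyFeeds.rss") { |f| RSS::Parser.parse(f, false) }
puts "Channel: #{feed.channel.title} (#{feed.items.size} items)"
feed.items.each do |item|
  puts "#{item.date}  #{item.title}  #{item.link}"
end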
Having read the above, do you now have a better understanding of how to use Ruby and Nokogiri to simulate a crawler and export an RSS feed? If you'd like to learn more, follow the 创新互联 industry news channel. Thanks for your support.