DepSpid
{{Infobox Project
| name =DepSpid
| logo =[[Image:DepSpid_Logo.png|230px]]
| screenshot =
| caption =No screensaver graphics
| developer =[http://www.depspid.net/contact.php Bjoern Henke]
| released =November 2006
| operating system =Windows
| platform =[[BOINC]]
| program info =
| work unit info =
| status =Ended / registration closed
| genre =Internet
| optimization =None
| website =http://www.depspid.net/
}}
[[DepSpid]] is still under development, as is PerlBOINC, an attempt to implement the BOINC server system in the Perl programming language. The DepSpid application currently runs under Windows only; a Linux application may follow in the future, but this is not yet certain.
DepSpid is a distributed web crawler (like those used by search engines) with two major goals:
# Build a database of the dependencies between individual web sites and groups of web sites.
# Collect statistical data about the structure of the web.
All information collected by the spider will be made publicly available.
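The first goal, recording dependencies between web sites, can be illustrated with a small sketch. The following Python example (not DepSpid's actual code, which is not public here; all function names are hypothetical) parses the links on a page and records which external hosts the page depends on:

```python
# A minimal sketch (assumed design, not DepSpid's implementation) of how
# a crawler can derive site-to-site dependency edges: parse a page's
# links and collect the external hosts it points to.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkParser(HTMLParser):
    """Collects href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def site_dependencies(page_url, html):
    """Return the set of external hosts that the page links to."""
    parser = LinkParser()
    parser.feed(html)
    own_host = urlparse(page_url).netloc
    deps = set()
    for link in parser.links:
        host = urlparse(link).netloc
        # Relative links (empty host) and same-site links are not
        # dependencies on another site, so they are skipped.
        if host and host != own_host:
            deps.add(host)
    return deps

sample = ('<a href="http://www.boinc.berkeley.edu/">BOINC</a>'
          '<a href="/contact.php">contact</a>')
print(site_dependencies("http://www.depspid.net/", sample))
# {'www.boinc.berkeley.edu'}
```

Aggregating such edges over many crawled pages yields the kind of site-dependency database and structural statistics the project describes.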
[[Category:分布式计算项目]][[Category:网络类项目]][[Category:BOINC 平台上的项目]][[Category:DepSpid]]
Revision as of 19:23, 30 November 2009