Thread starter: vmzy

[Independent Platform] [Life Sciences] Folding@Home

Posted 2010-1-15 17:54:54

Reply to post #31 by Bismarck

That really tests your DIY skills...

Posted 2010-1-15 19:15:35
You two guys above sure know how to tinker.

Posted 2010-1-15 19:45:47
Or... install a low-profile water-cooling block~

Posted 2010-1-17 10:47:00
What about the radiator and the pump? The cost is far from trivial.

Posted 2010-1-18 15:58:55


JANUARY 17, 2010
Paper #72: Major new result from Folding@home: simulation of the millisecond timescale

Simulating protein folding on the millisecond timescale has been a major challenge for many years.  When we started Folding@home, our first goal was to break the microsecond barrier.  The millisecond barrier is 1000x harder, and breaking it represents a major step forward in molecular simulation.

Specifically, in a recent paper (http://pubs.acs.org/doi/abs/10.1021/ja9090353), Folding@home researchers Vincent Voelz, Greg Bowman, Kyle Beauchamp, and Vijay Pande have broken this barrier. The movie below is of one of the trajectories that folded (i.e. started unfolded and ended up in the folded state). From simulations like these, we have found some new surprises in how proteins fold. Please see the paper (url above) for more details.

Why is this important?  Protein misfolding occurs on long timescales, and this first millisecond-scale simulation of protein folding demonstrates that our new Markov State Model (MSM) technology can successfully simulate very long timescales.  It makes sense to go after protein folding first, since there is a wealth of experimental data against which to test our simulations.

While this paper on protein folding has just come out, we have already been using this MSM technology to study protein misfolding in Alzheimer's Disease, following up from our 2008 paper. While our previous paper was able to get to long enough timescales to see small molecular weight oligomers, this new methodology gives us hope to push further with our simulations of Alzheimer's, making more direct connections to larger, more complex Abeta oligomers than we were previously able to do.


Posted 2010-1-18 16:20:17
I don't quite follow all this. I'm on an AMD 620 X4 with 4 GB DDR3-1066; if there's a quad-core optimized package, please let me know so I can fold faster.

Posted 2010-1-18 16:32:07

Reply to post #37 by 卡西莫多

eqzero has an A2-core SMP setup; before the forum got "harmonized" there was a tutorial and a portable package for download, but they're gone now.

You can ask him for the portable package; with it a typical quad-core can hit 4.5K [PPD].

Posted 2010-1-20 08:47:56

Major power outage at Stanford

JANUARY 19, 2010

The electrical power is out at most Stanford buildings as of 5:20 this morning (Tuesday, Jan 19).  SHC Engineering and Maintenance reports that the Stanford Cogen power plant (the power plant that powers Stanford) is currently off line.  Emergency generators in most of our server rooms are operating, but one room (associated with VSPG* servers) is currently without power.

We do not currently have an estimate of when power will be restored.  Moreover, while much of FAH is still up at the moment, we may have to take servers down if the temperature in the server rooms gets too high (the cooling is down as well).

We'll update the blog as we get news.  You can also see which servers are up or down on our serverstat page.

UPDATE:  3pm Pacific Time
It took a long time for the server room in the Computer Science Building to come back on line, but it's back now.  We are checking out the servers and restarting binaries to serve FAH WUs.  Looks like this is basically over.  Due to redundancy in our use of server rooms on campus, most of FAH was up even while the whole University was in the dark, so while this outage was non-ideal, luckily most of FAH was working during the outage.


OP | Posted 2010-1-27 16:15:37
Mon Jan 25, 2010 2:02 am
upcoming release of SMP2 cores
After a long development process, we are excited to announce the upcoming release of SMP2 (threads-based SMP) cores to public testing. The first SMP2-based core is the A3 core, and it will soon become available on advanced methods for OS/X (Intel), 64-bit Linux, and Windows. We are still doing development work to refine the A3 core, but it is at the point where we are ready for public testing.

We are excited about the SMP2 cores because the threads-based parallelization allows us to dispense with the MPI-based parallelization that added an extra layer of complexity and was particularly troublesome for Windows users. We anticipate phasing out the earlier SMP cores and work units in favor of these new ones; at this point in the changeover process, our Windows SMP client will still require MPI to be installed so that the client can handle an A1 work unit if no A3 work units are available. In the near future we will release an updated Windows client that does not require MPI.

The SMP2 cores require a client update; please upgrade your SMP-capable clients to at least version 6.29. We will gradually discontinue SMP projects for earlier clients.

Important: the SMP2 cores use the early-completion bonus system that we piloted with the bigadv work units. We have revamped the benchmarking system to work with this bonus system. The base point values for SMP2 work units will appear low; the benchmarked points values **include bonuses.** Some third-party utilities have been updated to include these bonuses in their calculations.

Please see an accompanying post regarding the bonus system.

One important part of the bonus system is that users:
1. Must use a passkey to receive bonus points
2. Must successfully return >=10 A2 or SMP2 work units with their passkey to receive bonus points
3. Must successfully return >80% of A2 or SMP2 work units to receive bonus points

We will shortly perform a limited "reset" of the bonus-qualifying work unit history. Important: users who have already qualified for a bonus will remain bonus-qualified. We will also maintain each user's percentage returned but will reduce the overall counts to 10. As we do not have an automated "timeout" for bonus-qualification history, we may perform such resets periodically, though rarely, in the future.

Thanks! We are excited to release these new cores to the public.
Rough translation:
SMP2 is about to enter public beta.
SMP2 uses the A3 compute core and adopts the bigadv bonus scheme: the earlier you upload, the more bonus points you earn.
How to join: 1. Install the latest 6.29 SMP client (if you already have 6.24 installed, you can download a standalone exe and update over it). 2. Add the -advmethods flag to join the beta.
Notes: 1. Although SMP2 no longer uses MPI, there are not many test WUs at the moment; to keep FAH from sitting idle, you should still install Deino or MPICH, in case the server hands out an SMP1 WU that you otherwise could not compute. 2. Apple PPC is not part of the beta.
Conditions for bonus points: 1. You must set a passkey. 2. You must complete at least 10 A2 or A3 WUs before bonus points kick in. 3. Your on-time success rate must be above 80% (if your machine is unstable and keeps producing errors, or you keep deleting WUs by hand, no bonus points for you). 4. Results must be returned within the preferred deadline.

Translator's notes:
Passkey request page: http://fah-web.stanford.edu/cgi-bin/getpasskey.py (note: usernames are case-sensitive)
According to official internal-test data, a Q6600 at 2.4 GHz is the dividing line. If your machine is slower than that, your PPD ordering is A1 < A3 < A2; if it is faster, A1 < A2 < A3.
Below are internal-test A3 PPD figures, for reference:
Core 2 Duo T9300 (45nm) 2.5 GHz: 2000 PPD on projects 6012, 6014, 6015
Core 2 Duo Xeon 3075 (65nm) 2.67 GHz: 1950 PPD on projects 6012, 6014, 6015


OP | Posted 2010-1-27 16:23:46
Mon Jan 25, 2010 2:03 am
Points system for SMP2 work units
The SMP2 Core A3 work units mark the debut of a new points system. We have been testing the key element of this system, early-completion bonuses, in the bigadv work unit program. Please refer to this document for a more detailed explanation of the points system. We are also changing our benchmark system over to a Core i5. Points have been calibrated against previous benchmarking setups, as described below.

Introduction
Points are a key aspect of distributed computing projects such as Folding@home (FAH): they both indicate to donors how much they have contributed and foster the friendly competition between donors that has always been an essential part of distributed computing. Folding@home's point system is based on the concept of a benchmark machine, i.e. a particular class of hardware which we use as a standard to define how many points a given calculation should get. The choice of this benchmark machine can have implications for donors' points. Moreover, how we use this benchmark is important.

Our benchmarking philosophy tries to balance two elements: keeping the system reasonably simple (both for donors and for the FAH team to calculate) and keeping points in alignment with the scientific value of a given calculation. Indeed, donors will optimize their machines (e.g. choice of hardware, choice of clients) based on points, so it is important that the points awarded reflect the scientific gain.

While our basic benchmark idea is pretty simple, this document is fairly long in order to give donors full details about how we have chosen the benchmark machine as well as giving detailed information of this machine and how this could impact points for donors.

Benchmark philosophy
Our philosophy is pretty simple: we would like to standardize benchmarks to a single machine and standardize and simplify the bonus schemes now employed. Bonuses have played a key role in aligning points with science and we will continue to use them. For example, returning work units (WUs) promptly can be very important for the science we’re doing, so we provide bonuses for this, especially with the high performance clients.

Machines used in comparison
We chose a 2.2 GHz E6600 as the prototype dual-core machine and a Q6600 at either 2.4 GHz or 3.2 GHz as the prototype quad-core.
The new benchmark machine is a Core i5-750 with Turbo Mode off. We compare single-core performance to the old benchmark machine, a 2.8 GHz Pentium 4.

FAH Projects used in the comparison
We base comparisons to the single-core benchmark machine on projects 4442 and 6315, comparing single-core speed on the 2.8 GHz Pentium 4 to ideal quad-core speed on the 2.6 GHz Core i5 machine.
We base comparisons to quad-core machines on project 2671.
We base comparisons to dual-core machines on project 6012.

Results
Machine: performance relative to Core i5:
P4 2.8: 0.098 (on project 4442)
P4 2.8: 0.12 (on project 6315)
E6600: 0.30
Q6600-3.2: 1.1
Q6600-2.4: 0.82

Based on these multiplicative speed factors, we can project ppd output based on either the A1 or the A2 benchmarking standards.
Machine: A1 ppd / A2 ppd:
E6600: 521 / 1663
Q6600-3.2: 1933 / 6172
Q6600-2.4: 1450 / 4629


Bonus point formula
Briefly summarizing our bonus formula, the bonus is applied for users who have a passkey, have successfully returned at least 10 bonus-eligible WU's, successfully return >=80% of assigned WU's, and return the WU before the preferred deadline. Bonus points do not apply to partial returns.

Our bonus formula calculates final points as follows:
final_points = base_points * max(1,sqrt(k*deadline_length/elapsed_time))
Note that the max(1,...) ensures that final_points are never lower than base_points.
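As a sanity check on the formula above, here is a minimal Python sketch; the k, deadline, and point values below are hypothetical, chosen only to make the arithmetic visible:

```python
import math

def final_points(base_points, k, deadline_length, elapsed_time):
    # The max(1, ...) clamp guarantees final points never fall below
    # base points, e.g. for WUs returned close to the preferred deadline.
    bonus_factor = max(1.0, math.sqrt(k * deadline_length / elapsed_time))
    return base_points * bonus_factor

# Hypothetical WU: k = 4, 10-day deadline, returned in 2.5 days:
# sqrt(4 * 10 / 2.5) = sqrt(16) = 4x bonus.
print(final_points(100, 4, 10.0, 2.5))    # 400.0
# Returned slowly (k * deadline / elapsed < 1), the factor clamps to 1:
print(final_points(100, 4, 10.0, 100.0))  # 100.0
```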

We can convert this formula to points per day as follows:
ppd = base_ppd * speed_ratio * max(1,sqrt(x*speed_ratio)),
where speed_ratio is the machine speed relative to the Core i5, and x = k * deadline_length.

Parameter determination
If we set the new quad-core base ppd to 1024 and the parameter x to 30, we get the following results:

Machine: projected ppd:
E6600: 903 (greater than A1, less than A2)
Q6600-3.2: 6456 (greater than A2)
Q6600-2.4: 4628 (approximately equal to A2)
P4: 171 (on project 4442)
P4: 228 (on project 6315)
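The projection above can be sketched in Python using the ppd formula and the stated parameters (base_ppd = 1024, x = 30); small differences from the table come from the rounded speed ratios quoted in the Results section:

```python
import math

BASE_PPD = 1024  # new quad-core base ppd from the post
X = 30           # the x parameter from the post

def projected_ppd(speed_ratio, base_ppd=BASE_PPD, x=X):
    # ppd = base_ppd * speed_ratio * max(1, sqrt(x * speed_ratio)),
    # where speed_ratio is the machine's speed relative to the Core i5.
    return base_ppd * speed_ratio * max(1.0, math.sqrt(x * speed_ratio))

# Rounded speed ratios from the Results table above.
for name, ratio in [("P4 (project 4442)", 0.098), ("E6600", 0.30),
                    ("Q6600-3.2", 1.1), ("Q6600-2.4", 0.82)]:
    print(f"{name}: {projected_ppd(ratio):.0f} ppd")
```

Note how the clamp behaves for very slow machines: once x * speed_ratio drops below 1, the bonus factor is 1 and ppd scales linearly with speed.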

Explanation of x parameter
We may vary the deadline length between projects (some projects require fast completion and thus have short deadlines). Each project has an associated k parameter that controls the bonus points yield. We standardize k as follows:
x * speed_ratio = k * deadline_length / elapsed_time
since we can express speed_ratio as Core_i5_time / elapsed_time:
x * Core_i5_time / elapsed_time = k * deadline_length / elapsed_time
therefore:
x * Core_i5_time = k * deadline_length
solving for k, we obtain:
k = x * Core_i5_time / deadline_length
and since x is set to 30,
k = 30 * Core_i5_time / deadline_length, where Core_i5_time is the time to complete a work unit on our Core i5 benchmark machine.
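The derivation can be checked numerically with a short sketch; the project times below are hypothetical:

```python
def k_for_project(core_i5_time, deadline_length, x=30):
    # k is chosen per project so that on the benchmark machine itself
    # (elapsed_time == core_i5_time) the bonus argument equals x.
    return x * core_i5_time / deadline_length

# Hypothetical project: 2 days on the Core i5, 10-day deadline.
k = k_for_project(2.0, 10.0)
print(k)               # 6.0
# On the benchmark machine, k * deadline_length / elapsed_time == x:
print(k * 10.0 / 2.0)  # 30.0, by construction
```

The design choice is that k absorbs the per-project deadline, so the bonus curve looks the same across projects with very different deadlines.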

Summary
According to our projections, this new benchmarking standard will result in points yield for a 2.8 GHz P4 that is slightly above the typical uniprocessor values, points yield for a 2.2 GHz E6600 that is greater than typical A1 core yields but less than typical A2 core yields, points yield for a 3.2 GHz Q6600 that is greater than typical A2 yields, and additional points yield rewards for faster systems. The crossover point between A3 and A2 ppd in speed falls approximately at a 2.4 GHz Q6600.

PS some users report substantially better points yield for certain dual-core machines on the first round of SMP2 projects. These are user data rather than what we use in our benchmarking calculations, but I have included them here because they may be of interest to many:

Core 2 Duo T9300 (45nm) 2.5 GHz: 2000 PPD on projects 6012, 6014, 6015
Core 2 Duo Xeon 3075 (65nm) 2.67 GHz: 1950 PPD on projects 6012, 6014, 6015


OP | Posted 2010-1-28 11:31:16
PPD update
6-core/12-thread native linux at 3.3GHz running bigadv WUs ---> 33,000 PPD
4-core/8-thread native linux at 3.8GHz running bigadv WUs ---> 27,700 PPD
4-core/8-thread Windows at 4.2GHz running A3 WUs ---> 20,500 PPD
6-core/8-thread Windows at 3.7GHz running A3 WUs ---> 18,600 PPD
The officially confirmed bug so far is that A3 can run at most 8 threads, as the last line of data above suggests.

OP | Posted 2010-2-7 23:25:03
February 06, 2010
Stats db migration in progress
We've talked about this for some time, but now's the time to start the migration to the new stats db hardware.  We are doing it now and everything looks ok so far.  We are keeping several safeguards in place in case there is a problem.

If there is a problem with the stats, please bear with us.  There are several links we need to update, and it's possible that a link is still pointing to the old db.  Also, in case of emergency, we are keeping track of all new stats from this point on in a special place, so even in the worst-case scenario we can go back to the old db and input all the new stats into it.

So, the stats will be down for a bit and there may be some inconsistencies for a day or so while we get all the links updated.  The good news is that we'll have much faster stats soon, which will be great for all of us.


UPDATE 1
The migration is now done and it looks like everything is working.  We've tested out the stats pages and done a small manual stats update.  All looks good.  However, since stats are so important, before diving in and just putting everything back to automatic updates, I wanted to see if donors see any problems.  If you do, please report them in our forum (http://foldingforum.org).

UPDATE 2
It looks like everything has migrated well.  We have the stats back on normal updates and those updates are going fast (under 10 minutes).  With the new hardware, I bet we can make it even faster, but that's for later.  We have turned back on certain features we previously turned off (eg CPU counts).  We have more ambitious plans for the future, especially ideally getting to the point where the stats are never off line (even during updates), which is now possible with the new hardware.


OP | Posted 2010-2-8 09:44:06
February 07, 2010
Stats db update complete
We're done with the bulk of our initial update to new hardware.  We'll be doing some more work in the future to build up additional capacity, namely, we hope, getting to the point where the stats are never off line.  For now I think we're in good shape.  The stats are much faster than before, so we've turned back on a lot of the capabilities we previously turned off.  Also, stats updates are taking about 5 minutes and are now limited not so much by db access as by other issues.

Moreover, we have now set the third-party stats files to update once an hour (instead of once every 3 hours).  They update 10 minutes before the hour, every hour, so checking on the hour should be safe.

Note that the pages that are updated are: http://fah-web.stanford.edu/daily_user_summary.txt.bz2 http://fah-web.stanford.edu/daily_team_summary.txt.bz2 Please do not use scripts to access our main pages (i.e. anything with a cgi-bin in the url). We reserve the right to ban any IP that violates this rule, as it slows down the stats for everyone else.
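For anyone scripting against the stats, here is a minimal Python sketch of a polite fetcher for the two sanctioned files; the hourly scheduling is left to the caller, and the assumption that the decompressed payload is plain UTF-8 text is mine:

```python
import bz2
import urllib.request

# The two files the post says are safe for scripted access.
TEAM_URL = "http://fah-web.stanford.edu/daily_team_summary.txt.bz2"
USER_URL = "http://fah-web.stanford.edu/daily_user_summary.txt.bz2"

def decode_summary(raw):
    # Decompress a downloaded .bz2 summary dump into plain text.
    return bz2.decompress(raw).decode("utf-8", errors="replace")

def fetch_summary(url=TEAM_URL):
    # Download one summary dump. Per the post: run this at most once an
    # hour (updates land 10 minutes before the hour, so fetching on the
    # hour is safe), and never hit the cgi-bin pages from a script.
    with urllib.request.urlopen(url) as resp:
        return decode_summary(resp.read())
```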
Rough translation:
The stats server migration is complete. From now on, frequent scripted access to the stats pages will get your IP banned.


OP | Posted 2010-2-22 20:08:49
February 10, 2010
Please do not access folding stats with scripts
Our main stats web server is being hit with a denial-of-service-like attack from several machines.  They are accessing cgi-bin URLs multiple times per second per IP, which is slowing down the web server for everyone else.  We have banned some IPs and will keep watching and ban more as needed.

Please stop running scripts -- it ruins the stats for everyone else.

UPDATE:  It seems like the DOS-ers are often going to the fahproject page, so we have deactivated it for now to keep the rest of the site up.  This seems to have helped a lot, coupled with some IP banning.
Rough translation:
The stats system has started banning IPs.


OP | Posted 2010-2-22 20:12:57
February 19, 2010
Update on NV GPU servers
We have been working to track down the nasty bug on the NVIDIA GPU work servers (WSs) that is causing problems for donors sending back WUs.  We have been trying different fixes over the last week, but this has been very tricky to figure out.

After another brainstorming session this afternoon, I think we have a good plan for the short term and long term.  I hope that new WUs being assigned won't see this problem due to rerouting of assignments.  Joe is also going to pound out the bugs on his new WS on vspg11a to get that going.

I'm very sorry for this major issue.  This has been called the worst outage we've had and I think we agree.  I've had a long chat with the development team about this and we've talked about how to fix issues in the WS code release cycle.  I think the plan we have in place will stop this from happening in the future, but the main issue right now is to solve the problems at hand.

UPDATE 6pm 2/19/2010 -- after a week of working on this, trying lots of stuff, and nothing working, I think we've found something promising.  I'm nervous typing this as everything looked promising before, but at least I think Joe's found the reason for the problem, which is the hard part.

UPDATE 11pm -- so far so good.  It looks like this fix may be sticking.

UPDATE 7:30am 2/20/2010 -- looks like the fix is indeed working.  We will continue to monitor the servers closely over the weekend.
Rough translation:
The NV GPU servers had a problem that prevented results from being uploaded. They are working on a permanent fix.

