Loren Data's SAM Daily™

fbodaily.com
FBO DAILY ISSUE OF APRIL 23, 2010 FBO #3072
MODIFICATION

70 -- Modification to Synopsis/Solicitation

Notice Date
4/21/2010
 
Notice Type
Modification/Amendment
 
NAICS
334111 — Electronic Computer Manufacturing
 
Contracting Office
N00178 NAVAL SURFACE WARFARE CENTER, Dahlgren Division, 17362 Dahlgren Road, Suite 157, Dahlgren, VA
 
ZIP Code
00000
 
Solicitation Number
N0017810R1024
 
Response Due
4/30/2010
 
Archive Date
6/30/2010
 
Point of Contact
Linda Wilkes, Voice: 540-653-7081, Fax: 540-653-7088
 
Small Business Set-Aside
N/A
 
Description
1. Combined Synopsis/Solicitation Amendment 1 is issued to answer questions and to extend the closing date and anticipated award date.

2. The closing date is changed to April 30, 2010, with an anticipated award date of May 14, 2010.

3. The following are responses to questions submitted in writing with respect to solicitation N00178-10-R-1024:

Q. 1. There were multiple questions requesting additional information about a backup system.
A. 1. The 16 TB mass storage will not be used to back up user data. The system being procured is for periodic snapshots of user data and applications. Precious data will be handled by a different backup system.

Q. 2. Will systems other than HP and SGI be acceptable?
A. 2. No. NSWC has invested time and funds in getting the Department of Energy CTH and Sierra Multiphysics codes to run properly on the HP- and SGI-specific operating systems. Purchasing a machine with a different Linux OS and proprietary modifications would likely require an additional investment of time and money to get CTH and Sierra to work, and there is no guarantee that the codes would ever work correctly on a new configuration of Linux and the proprietary modifications provided with a non-SGI or non-HP cluster.

Q. 3. Is the current cluster environment Intel or AMD? HP or SGI? Will they share resources?
A. 3. The current cluster is not Intel or AMD; it is both HP and SGI. The cluster will not share resources.

Q. 4. Do you have a preference for supported or unsupported (open-source) queuing systems, e.g., PBS Pro or SGE?
A. 4. The preference is for LSF because the Department of Energy Sierra Multiphysics code suite is designed to work with LSF. (An illustrative LSF submission script appears after this notice text.)

Q. 5. How many users will access the new cluster?
A. 5. Approximately 10 users maximum will be logged into the system at any one time, usually fewer.

Q. 6. Is there a preference for Red Hat or SUSE?
A. 6. A Red Hat-based OS is preferred.

Q. 7. Is this a sole source to add on to legacy clusters, or an entirely new resource that will be acquired?
A. 7. This is an entirely new resource that will be acquired. However, computer codes developed by DOE are the primary analysis tools used by NSWCDD for physics-based modeling and simulation. These codes consist of several tens of programs and software libraries that must all work together, as well as with the Linux cluster operating system and the high-speed interconnect hardware through which the compute nodes communicate. In addition, HP and SGI have proprietary modifications to the Linux operating system which significantly increase the speed of CTH, Sierra, and ALE3D. Past experience with other Linux cluster manufacturers has shown there is no guarantee that DOE software will run correctly on other architectures.

Q. 8. Is the end user willing to review the AMD Magny-Cours 12-core CPU as a processor option? Magny-Cours may currently offer the best flops per dollar compared to Intel's Nehalem 5500 series.
A. 8. Yes.

Q. 9. What type of network is this cluster going to be built on: InfiniBand, ConnectX, 100/1000 Ethernet?
A. 9. A fast interconnect for the computations and Ethernet for housekeeping. Regarding the fast interconnect, we have had a great deal of success with InfiniBand on our existing Linux clusters.

Q. 10. Is the end user interested in Fermi GPUs for CUDA compute?
A. 10. No. This would be inappropriate for the CTH and Sierra Multiphysics codes.

Q. 11. Will the 16 TB be NFS-mounted or require a file system?
A. 11. File system.

Q. 12. Would the end user be interested in using ScaleMP to create an SMP from standard machines?
A. 12. No; the processor limitation of ScaleMP is too low.

Q. 13. Please clearly define the salient specifications required to supply the government with a sufficient response.
A. 13. The specification supplied with the synopsis was written so that a vendor familiar with Linux clusters used in a massively parallel mode for scientific computing (that is, with MPI and low-latency interconnects), rather than with server farms, would have the flexibility to suggest a solution based on the best available technology. (A brief MPI sketch follows this notice text.)

Q. 14. Can you please confirm that "2) User I/O capabilities with workstation level graphics (e.g. nVidia Quadro), high resolution LCD display, keyboard and mouse" does not refer to each node in the cluster, but rather to just one user interface node?
A. 14. The LCD display, keyboard, and mouse are used for system administration, reside in the rack with all the nodes, and communicate directly with the head node. Only one is required.

Q. 15. Is the requirement for a stand-alone graphics workstation or for the ability to add graphics to one of the cluster nodes?
A. 15. This requirement was meant to describe the inclusion of a high-performance graphics card in the head node. One or two users who are remotely logged in may be running visualization software on the cluster. The graphics card in the head node should be able to provide high-quality graphics to the remote terminals.

Q. 16. Is the Portland compiler a requirement, or can Intel compilers be provided?
A. 16. Either Portland or Intel compilers are acceptable.

Q. 17. Low-latency interconnect: which is desired, GigE, 10 GigE, IB, or something else? If InfiniBand is a suitable option, then QDR or DDR?
A. 17. The preference is InfiniBand with QDR.

Q. 18. With respect to the 16 TB of storage, is the requirement to have this shared outside of the cluster?
A. 18. This storage is not shared outside the cluster.

Q. 19. High-quality power distribution: this is not a UPS, correct?
A. 19. Correct; this does not refer to a UPS.

4. All other terms and conditions of the solicitation remain in full force and effect.
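A. 13 and A. 17 frame the requirement around MPI applications running over a low-latency interconnect such as QDR InfiniBand. As a minimal sketch of that usage model, assuming only a standard MPI installation (e.g., Open MPI or MPICH; nothing here is specified by the solicitation), the following C program times a two-rank ping-pong, the round-trip measurement that a low-latency fabric is meant to keep small:

    /* pingpong.c - rough one-way latency estimate between two MPI ranks
       (illustrative only; not part of the solicitation). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int reps = 1000;  /* round trips to average over */
        char byte = 0;          /* 1-byte payload isolates latency */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)  /* each round trip is two one-way hops */
            printf("approx. one-way latency: %.2f us\n",
                   (t1 - t0) / (2.0 * reps) * 1e6);
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc pingpong.c -o pingpong and run with two ranks (mpirun -np 2 ./pingpong), single-digit-microsecond results are typical of InfiniBand-class fabrics, versus tens of microseconds over commodity Ethernet; exact figures depend on the hardware and MPI stack.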
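A. 4 states a preference for LSF because the Sierra suite is designed to work with it. For illustration only, a minimal LSF batch script to launch the program above might look like the following; the job name, rank count, output file, and wall-clock limit are placeholder assumptions, and site-specific queues and MPI integration will differ:

    #!/bin/bash
    #BSUB -J pingpong          # job name (placeholder)
    #BSUB -n 2                 # number of MPI ranks requested
    #BSUB -o pingpong.%J.out   # stdout file; %J expands to the LSF job ID
    #BSUB -W 0:10              # wall-clock limit (hh:mm)

    mpirun -np 2 ./pingpong

The script would be submitted with bsub < pingpong.lsf; LSF reads the #BSUB directives as if they were command-line options.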
 
Web Link
FBO.gov Permalink
(https://www.fbo.gov/spg/DON/NAVSEA/N00178/N0017810R1024/listing.html)
 
Record
SN02128414-W 20100423/100421235054-54c72cd84f3f82aaaacd3ec3827c9ec8 (fbodaily.com)
 
Source
FedBizOpps Link to This Notice
(may not be valid after Archive Date)
