Lycos search: Connection Machine CM5 Thinking Machines

Load average: 3.01. Lycos Dec 05, 1994 catalog, 840,327 unique URLs (see Lycos News)

Printing only the first 20 of 7242 hits on words: connection, connectiondirection, connectiong, connectionism, connectionist, connectionists, connectionless, connectionmagazine, connectionnumber, connections, connections11, connectionsbetween, connectionshome, machine, machine1, machine2, machine3, machine4, machinecult, machined, machinedesign, machinegunners, machineguns, machinehead, machineinterface, machinelearning, machinem, machinename, machinenamen, machinenames, machinery, machines, machinescript, machineshops, cm5, cm5027, cm5067, cm5096, cm5206, cm5211, cm5234, cm5282, cm5292, cm5345, cm5a12, cm5a3,

ID714356: [score 1.0000]

date: 23-Nov-94
bytes: 718
links: 1

title: cm5-managers

outline: cm5-managers

keys: thinking machine machines

excerpt: cm5-managers Contact: (machine) (human) (J. Eric Townsend) Purpose: Discussion of administrating the Thinking Machines CM5 parallel supercomputer. To subscribe, send a message to with a *body* of "subscribe cm5-managers your_full_name". This mailing list is listed in the list of Publicly Accessible Mailing Lists - maintained by Stephanie da Silva.

ID767386: [score 0.9873]

date: 28-Nov-94
bytes: 2330
links: 10

title: Massively Parallel Processing

outline: Massively Parallel Processing

keys: machines connection thinking machine

excerpt: Massively Parallel Processing Two Thinking Machines Corporation Connection Machines, a 16K node CM-200, and a 256 node, vector-equipped CM-5E make up the major portion of the parallel computing resources of the group. The Center for Computational Sciences (CCS) is part of the National Consortium for High Performance Computing ( NCHPC ). The CCS provides research computing to government, industry, and academia. Contact Denise Yates for account information. In addition to providing computing services, the group uses its Connection Machines for its own research projects. The Global Ocean Prediction project is producing massively parallel versions of operational ocean prediction and weather forecast models. Here is an MPEG movie of a Connection Machine Simulation

ID811240: [score 0.9678]

date: 30-Nov-94
bytes: 1952
links: 5

title: UMIACS-TR-93-80

outline: CS-TR-3123, UMIACS-TR-93-80 University of Maryland Department of Computer Science and Department of Electrical Engineering, and

keys: machines thinking connection machine

excerpt: UMIACS-TR-93-80 CS-TR-3123, UMIACS-TR-93-80 Scalable Data Parallel Algorithms for Texture Synthesis and Compression using Gibbs Random Fields This paper introduces scalable data parallel algorithms for image processing. Focusing on Gibbs and Markov Random Field model representation for textures, we present parallel algorithms for texture synthesis, compression, and maximum likelihood parameter estimation, currently implemented on Thinking Machines CM-2 and CM-5. Use of fine-grained, data parallel processing techniques yields real-time algorithms for texture synthesis and compression that are substantially faster than the previously known sequential implementations. Although current implementations are on Connection Machines, the methodology presented here enables
html abstract

ID286903: [score 0.9668] gopher://

date: 25-Nov-94
bytes: 3232

keys: thinking machines connection machine

excerpt: Overview of Wide Area Information Servers Brewster Kahle April 1991 The Wide Area Information Servers system is a set of products supplied by different vendors to help end-users find and retrieve information over networks. Thinking Machines, Apple Computer, and Dow Jones initially implemented such a system for use by business executives. These products are becoming more widely available from various companies. What does WAIS do? Users on different platforms can access personal, company, and published information from one interface. The information can be anything: text, pictures, voice, or formatted documents. Since a single computer-to-computer protocol is used, information can be stored anywhere on different types of machines. Anyone can use this system since

ID600254: [score 0.9619]

date: 25-Nov-94
bytes: 2525
links: 5

title: USING MPI

outline: Using MPI

keys: machines thinking connection machine

excerpt: USING MPI Using MPI Portable Parallel Programming with the Message-Passing Interface William Gropp, Ewing Lusk, and Anthony Skjellum The parallel programming community recently organized an effort to standardize the communication subroutine libraries used for programming on massively parallel computers such as the Connection Machine and Cray's new T3D, as well as networks of workstations. The standard they developed, Message-Passing Interface (MPI), not only unifies within a common framework programs written in a variety of existing (and currently incompatible) parallel languages but allows for future portability of programs between machines. Three of the authors of MPI have teamed up here to present a tutorial on how to use MPI to write parallel programs, particularly
Using MPI

ID807630: [score 0.9528]

date: 24-Nov-94
bytes: 3613
links: 2

title: SCD Computational Servers

outline: High Performance Computational Servers The CRAY Y-MP8/864 (Shavano)

keys: machines machine thinking

excerpt: SCD Computational Servers High Performance Computational Servers Computational servers in SCD's network include: * a CRAY Y-MP8/864 * a CRAY Y-MP2/216 * a four processor CRAY-3 * an eight node IBM SP-1 * an IBM RS/6000 Cluster, and * a 32 node Connection Machine (CM-5) These systems provide the computing power to run the large simulations required by our user base. The CRAY Y-MP8/864 (Shavano) Delivered in May, 1990, this supercomputer has eight processors, 64 million words (Mwords) of central memory, an internal speed of six nanoseconds (ns) per calculation, a 256-Mword Solid-state Storage Device (SSD), and 78 billion bytes (gigabytes) of disk storage. The CRAY Y-MP8/864 runs UNICOS, the UNIX-based operating system for Cray Research, Inc., computers. The machine
compute servers.
supercomputers and computing needs

ID756745: [score 0.9525]

date: 24-Nov-94
bytes: 13713

outline: NCSA CM-5 Overview Configuration Policies Accounting Training

excerpt: NCSA Connection Machine User Guide NCSA CM-5 Overview Configuration The Connection Machine Model 5 (CM-5) from Thinking Machines Corporation is a massively parallel, distributed memory system that supports both data parallel and message-passing programming. The I/O subsystem on the CM-5 includes a Scalable Disk Array (SDA) -- a parallel disk storage system connected directly to the CM-5 data network that provides high-speed disk I/O -- and a HIPPI interface that provides high-speed data transfer. NCSA's CM-5 has: *512 node processors *64-bit floating point and integer hardware *16-gigabytes (Gbytes) of memory *130-Gbyte Scalable Disk Array Each node consists of four vector units connected by a 64-bit bus to a SPARC CPU and a Network Interface chip. NCSA's CM
Overview of the CM-5

ID529318: [score 0.9272]

date: 27-Nov-94
bytes: 2888
links: 2

title: AHPCRC Research: Large-Scale Simulations

outline: AHPCRC Research Projects Large-Scale Simulations of Turbulent Geothermal Convection on a Network of Supercomputers

keys: thinking machines connections machine

excerpt: AHPCRC Research: Large-Scale Simulations AHPCRC Research Projects Large-Scale Simulations of Turbulent Geothermal Convection on a Network of Supercomputers The following simulation makes use of some of the tools developed by the Minnesota Supercomputer Center for the AHPCRC. This simulation executed across a HIPPI network and three architecturally dissimilar high performance computers, as well as a high performance graphics workstation. This picture shows the output as seen on the workstation's screen as the simulation is running. As the simulation runs, the window across the bottom of the screen displays the location of the data. A small icon represents each of the computers involved. As each computer performs its calculations, its icon is highlighted
Turbulent Geothermal Convection

ID388189: [score 0.7947]

date: 24-Nov-94
bytes: 4540
links: 19

title: NCSA CM-5 Welcome Page

outline: General information Current status

keys: connection thinking machine

excerpt: NCSA CM-5 Welcome Page NCSA's CM-5 Please note: This web server, like so many others, is under construction. I am nowhere near ready to announce my presence to the world through What's New With NCSA Mosaic . Please send any suggestions about this web server to Thank you, and have fun! General information NCSA Mosaic may be used to view the NCSA Connection Machine User Guide (which is under revision) or the NCSA CM-5 FAQ . Additionally, on-line TMC documentation can be viewed with cmview. Further on-line documentation (and postscript files for much of what can be viewed with cmview) is stored on the CM-5 in the /usr/local/doc directory. For example, a partial list of (extra) software installed on the CM-5 is in /usr/local/doc/Software
NCSA CM-5 Information

ID533234: [score 0.7894]

date: 28-Nov-94
bytes: 4418
links: 28

title: Sydney Regional Centre for Parallel Computing

outline: Sydney Regional Centre for Parallel Computing

keys: thinking connection

excerpt: Sydney Regional Centre for Parallel Computing Welcome to the SRCPC Web server. This provides information relating to high performance computing. The SRCPC is hosted by the University of New South Wales . The next Introductory CM5 Programming Course will be held from the 5th to the 8th of December at UNSW. Application forms may be downloaded here. * Introduction to SRCPC's CM5 * Local Hints * Introductory CM5 Course * Connection Machine CM5 Technical Summary * Latest Usage Stats for the CM5 CMSSL for C* Version 3.2 has numerous changes from V3....
AU University of N.S.W., Sydney Regional Centre for Parallel Computing
Sydney Regional Center for Parallel Computing

ID411658: [score 0.7651]
Thinking Machines' CM5

ID650849: [score 0.7523]

date: 03-Dec-94
bytes: 4835

excerpt: From Wed Mar 23 09:15:06 EST 1994 Subject: Nesl: a parallel functional language A full implementation of the NESL language and environment is now available via anonymous FTP. NESL is a fine-grained, functional, nested data-parallel language. The current implementation runs on workstations, the Connection Machines CM2 and CM5, the Cray Y-MP and the MasPar MP2. NESL is loosely based on ML. It includes a built-in parallel data-type, sequences, and parallel operations on sequences (the element type of a sequence can be any type, not just scalars). It is based on eager evaluation, and supports polymorphism, type inference and a limited use of higher-order functions. Currently it does not have support for modules and its datatype definition is limited
nesl implementation
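The NESL announcement above describes nested data parallelism: sequences whose elements may themselves be sequences of any type, with parallel operations applied over them. As a rough illustration only (Python comprehensions stand in for NESL's apply-to-each; the function names here are made up, not NESL syntax, and everything runs sequentially rather than in parallel):

```python
# Sketch: NESL-style nested data parallelism, illustrated with plain
# Python comprehensions. NESL would evaluate these element-wise operations
# in parallel; here they run sequentially. Names are hypothetical.

def apply_to_each(f, seq):
    """Apply f to every element of a sequence (NESL's "apply-to-each" idea)."""
    return [f(x) for x in seq]

def nested_apply(f, nested):
    """Apply f across a nested sequence, whose elements are themselves sequences."""
    return [apply_to_each(f, inner) for inner in nested]

# A sequence whose elements are sequences of different lengths:
rows = [[1, 2, 3], [4], [5, 6]]
squares = nested_apply(lambda x: x * x, rows)
print(squares)  # [[1, 4, 9], [16], [25, 36]]
```

The point of the nesting is that the inner sequences need not be the same length, which is what distinguishes this model from flat array parallelism.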

ID411653: [score 0.7519]

date: 29-Nov-94
bytes: 4715
links: 21

title: AIMS Home Page

outline: An Automated Instrumentation and Monitoring System Examples of usage

excerpt: AIMS Home Page An Automated Instrumentation and Monitoring System Detailed information about AIMS can be obtained by clicking on parts of the following slide AIMS consists of a suite of software tools for measurement and analysis of performance; it includes * xinstrument : a source-code instrumentor that supports Fortran77 and C message-passing programs written under three communication libraries: NX, CMMD, and PVM; * monitor : a library of timestamping and trace-collection routines that run on Intel's iPSC/860 and Paragon , Thinking Machines' CM5 , as well as networks of workstations (including Convex Cluster, SparcStations, and SGIs connected by a LAN); * tpp : a utility for removing monitoring overhead and its effects on the communication patterns as recorded

ID133763: [score 0.5468]

date: 26-Nov-94
bytes: 1376

keys: connection connections machines machine

excerpt: A Compile Time Model for Composing Parallel Programs Susan Hinrichs April, 1994 CMU-CS-94-108 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract Many distributed memory machines support connection-based communication instead of or in addition to connection-less message passing. Connection-based communication can be more efficient than message passing because the resources are reserved once for the connection and multiple messages can be sent over the connection. While long-lived connections enable more efficient use of the communication system in some situations, managing connection resources adds another level of complexity to programming such machines. iWarp is an example of a distributed memory machine that supports long-lived

ID739073: [score 0.5211]

date: 24-Nov-94
bytes: 4703
links: 17

title: Video

outline: CU-SeeMe Video conferencing experiments

keys: machines connections

excerpt: Video CU-SeeMe Video conferencing experiments * CU-SeeMe stuff (useful information) * NASA Select TV live. * Remote controlled pan & tilt video camera. Screendump from a CU-SeeMe session. All three participants are exchanging video through a reflector at NYSERNet, Liverpool, New York. From time to time I will be transmitting live video from our home. The transmissions are made from a Macintosh using the CU-SeeMe package from Cornell. You can watch these transmissions from any color Macintosh (with a direct IP Internet connection) running CU-SeeMe by connecting to the reflector machine at the MultiMedia Lab in our Computer Science Department (Østfold Regional College, Halden, Norway). Other reflectors can be found on the list
Cu-See-Me stuff
CU-SeeMe reflector sites etc.
CU-SeeMe Video-Conferencing Experiments
live video
Omtale og eksempler av CU-SeeMe
Video conferencing

ID161607: [score 0.5187] gopher://

date: 28-Nov-94
bytes: 3044
links: 19

keys: connection machinery machines machine

excerpt: Select one of: * Software * ACM SIGGRAPH Online Bibliography Project * Association for Computing Machinery (ACM) gopher * Computer jargon dictionary (search) * Connection Machine's FORTRAN manual (search) * DFN-CERT Security Archive * Functional Programming Abstracts (search) * Guide for finding source code (search) * High Performance Computing Newswire (HP...
Computer Science
North Carolina State University Library gopher

ID294287: [score 0.5118] gopher://

date: 24-Nov-94
bytes: 2521

keys: machinename machine machines connection

excerpt: Transferring Files to and from Another Machine Using FTP -------------------------------------------------------- FTP stands for File Transfer Protocol. It can be used to transfer files across the network between any machines running FTP. FTP is typically used in one of two ways: (1) to transfer files between accounts on two different machines, and (2) to obtain files made available for public distribution by other machines. If you want to transfer files between accounts, say between your account and that of a colleague at another university, you must know the machine name, account and password of your colleague's account. Then start FTP like this: ftp machinename where "machinename" is the full network name of your colleague's machine (probably something like
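The excerpt above walks through command-line FTP. For a sketch of the same anonymous-download flow in a modern setting, Python's standard ftplib module covers both cases the excerpt mentions; the host and paths below are placeholders for illustration, not addresses from the excerpt:

```python
from ftplib import FTP

def fetch_anonymous(host, remote_path, local_path):
    """Download one file from an anonymous FTP server.

    This mirrors the excerpt's "files made available for public
    distribution" case. For account-to-account transfer, pass real
    credentials to ftp.login(user, passwd) instead.
    """
    ftp = FTP(host)                 # connect, like: ftp machinename
    ftp.login()                     # no arguments -> anonymous login
    with open(local_path, "wb") as f:
        ftp.retrbinary("RETR " + remote_path, f.write)
    ftp.quit()

# Example call (placeholder host, not a real server from the excerpt):
# fetch_anonymous("ftp.example.org", "/pub/README", "README")
```

The `RETR` string passed to `retrbinary` is the underlying FTP protocol command, the same one a command-line `get` issues behind the scenes.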

ID754830: [score 0.4431]

date: 04-Dec-94
bytes: 2995
links: 9

title: Sorting for Particle Flow Simulation on the Connection Machine

outline: Sorting for Particle Flow Simulation on the Connection Machine Abstract

keys: connection machines machine

excerpt: Sorting for Particle Flow Simulation on the Connection Machine by Leonardo Dagum RNR Technical Report RNR-90-017 October, 1990 Abstract This paper investigates the sorting requirements of a particle simulation and analyzes the sorting algorithms currently in use on sequential, vector, and data parallel implementations of particle flow simulations. Particle simulation requires sorting n integers in the range [1, O(n)] and takes O(n) running time on sequential or vector machines. The data parallel implementation of a particle simulation is shown to be non-optimal with running time O(n log n). Until recently, there have been no optimal parallel integer sorting algorithms. This paper presents an optimal deterministic algorithm for parallel sorting in a particle
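The abstract above concerns sorting n integers drawn from the range [1, O(n)] in O(n) time. A standard sequential technique that achieves this bound is counting sort; the sketch below illustrates the linear-time idea only and is not the paper's data parallel algorithm:

```python
def counting_sort(keys, max_key):
    """Sort integers in the range [1, max_key] in O(n + max_key) time.

    Classic sequential counting sort, shown to illustrate the linear-time
    bound the abstract refers to when max_key is O(n). It is not the
    parallel algorithm presented in the report.
    """
    counts = [0] * (max_key + 1)
    for k in keys:
        counts[k] += 1                 # tally occurrences of each key
    out = []
    for k in range(1, max_key + 1):
        out.extend([k] * counts[k])    # emit each key counts[k] times
    return out

print(counting_sort([3, 1, 4, 1, 5, 2], 5))  # [1, 1, 2, 3, 4, 5]
```

Comparison-based sorts cannot beat O(n log n); counting sort sidesteps that lower bound precisely because the keys are small integers rather than arbitrary comparable values.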

ID756628: [score 0.4408]

date: 27-Nov-94
bytes: 3611
links: 2

outline: Acquisitions and Upgrade Paths

keys: machine machines thinking

excerpt: NCSA Scalable Metacomputing Strategy, April 1994 Acquisitions and Upgrade Paths SGI An SGI Power Challenge has been purchased and will be available by May 1994. This machine will initially have 32 R4400 processors (150 MHz, 75 Mflop peak), 2 Gbytes of shared memory, and 80 Gbytes of RAID disk. In July 1994, this machine will be upgraded to 16 TFP superscalar processors, each running at 75 MHz and 300 Mflop peak speed. This machine will be binary compatible (for single processor applications) with R4400-based SGI workstations. In addition, NCSA will experiment this summer with a Houston-based SGI Challenge Array consisting of four networked Challenge machines with 20 processors each. This machine will be used to evaluate both high performance science and engineering
Acquisitions and Upgrade Path
Convex Computer Corporation
Silicon Graphics Inc.
Thinking Machines Corporation

ID683514: [score 0.4407]

date: 29-Nov-94
bytes: 8891
links: 7

title: MPEG Movie Archive

outline: Latest News

keys: machine machines

excerpt: MPEG Movie Archive Latest News October 26, 1994 Machines within our university now have unlimited access to the archive. For machines within the .nl domain the R-rated section has been opened again. Questions regarding access for machines outside the Netherlands will not be answered. If all goes well and the traffic doesn't increase drastically, I will soon be opening the R-rated section of the archive to all machines during the weekend. October 3, 1994 A sad day in the history of the MPEG archive. Today I had to close the R-rated section of the archive because the machine it is running on (a Sun SPARCstation 10 with 32 MB of memory and 100 MB of swap!) was getting into memory problems. The main reason for that is that each connection occupies about 200 Kbytes
Hot News

back to the Lycos Home Page.

Lycos 0.9beta4 06-Dec-94 / 8-Dec-94 /