early 1990s as a metaphor for making computer power as easy to access as an electric power grid. • The ideas of the grid were brought together by Ian Foster and Carl Kesselman. • The Globus Toolkit was initially designed incorporating storage management, security provisioning, data movement, monitoring, etc. • In 2007 the term cloud computing came into popularity; it is conceptually similar to the canonical Foster definition of grid computing
and apportion pieces of a program among several computers • It can also be thought of as a form of network-distributed parallel processing. • It can be small, confined to a network of computer workstations, or it can be a large public collaboration across many companies or networks. • Applications can be executed by specifying their requirements, rather than by identifying the individual resources to be used.
computers to a single problem at the same time • It allows flexible resource sharing among geographically distributed computing resources in multiple administrative domains • It is commonly used for Grid applications rich in graphics and multimedia. • Controlled-shell and controlled-desktop mechanisms are used to restrict the user to executing only authorized commands and applications
special type of parallel computing that relies on complete computers (with on-board CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public, or the Internet) • A supercomputer, by contrast, has many processors connected by a local high-speed computer bus. • The primary advantage of distributed computing is that each node can be commodity hardware which, when combined, can produce a computing resource similar to a multiprocessor supercomputer, but at a lower cost.
• The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections • The high-end scalability of grids is generally favorable, due to the low need for connectivity between nodes • It is costly and difficult to write programs that can run in the environment of a supercomputer
computing resources belonging to multiple individuals • Computers that are actually performing the calculations might not be entirely trustworthy. • Measures were introduced to prevent participants from producing misleading results and from using the system as an attack vector. • Measures include assigning work randomly to different nodes and checking that at least two different nodes report the same answer for a given work unit.
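The redundancy measure described above can be sketched in a few lines: send the same work unit to randomly chosen nodes and accept a result only when at least two of them agree. This is a minimal illustration; the function and field names (and the `compute` callback standing in for a real node RPC) are hypothetical.

```python
import random
from collections import Counter

def verify_work_unit(unit, nodes, compute, replicas=2):
    # Assign the same work unit to several randomly chosen nodes.
    chosen = random.sample(nodes, replicas)
    results = [compute(node, unit) for node in chosen]
    # Accept only if at least two different nodes report the same answer.
    answer, agreement = Counter(results).most_common(1)[0]
    return answer if agreement >= 2 else None  # None: reschedule the unit
```

In a real system a disagreement (the `None` case) would trigger rescheduling on additional nodes, and repeat offenders could be blacklisted.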
that nodes will not drop out of the network at random times. • Using different platforms with many languages leads to a trade-off between investment in software development and the number of platforms that can be supported • Cross-platform languages can reduce the need to make this trade-off, though potentially at the expense of high performance on any given node
two perspectives need to be considered: the provider side and the user side. The Provider Side • The overall Grid market comprises several specific markets: • the Grid middleware market, • the market for Grid-enabled applications, • the utility computing market, and • the Software-as-a-Service (SaaS) market.
installed and integrated into the existing infrastructure of the companies involved • Major Grid middleware packages are Globus Toolkit, gLite, and UNICORE • Major players in the utility computing market are Sun Microsystems, IBM, and HP. • SaaS uses a Pay-As-You-Go (PAYG) model or a usage-based subscription model.
side of the Grid computing market, the different segments have significant implications for their IT deployment strategy. CPU Scavenging • CPU scavenging creates a "grid" from the unused resources in a network of participants • It makes use of instruction cycles that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day MARKET SEGMENTATION:
responsible for the management of jobs. • Allocation of the resources needed for any specific job • Partitioning of jobs to schedule parallel execution of tasks, and data management • Jobs submitted to Grid Computing schedulers are evaluated based on their service-level requirements • Rescheduling and corrective actions in partial-failover situations
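A minimal sketch of how a scheduler might evaluate a job against its service-level requirements before allocating resources. The field names are illustrative assumptions, not taken from any particular middleware:

```python
def schedule(job, resources):
    # Return the name of the first resource meeting the job's
    # service-level requirements, or None so the job can be
    # queued and rescheduled later.
    for res in resources:
        if res["cpus"] >= job["min_cpus"] and res["mem_gb"] >= job["min_mem_gb"]:
            return res["name"]
    return None

resources = [
    {"name": "clusterA", "cpus": 8, "mem_gb": 16},
    {"name": "clusterB", "cpus": 64, "mem_gb": 256},
]
```

A production scheduler would also weigh queue depth, data locality, and priority, and would re-evaluate on partial failover; this sketch shows only the requirement-matching step.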
between the service requester and the service provider. • This pairing enables the selection of the best available resources from the service provider for the execution of a specific task • In general, the resource broker may also select a suitable scheduler for the resource execution task • It uses resource information in the pairing process
among the resources in a Grid Computing environment. • Load balancing is integrated to avoid processing delays and over-commitment of resources. • These kinds of applications can be built in connection with schedulers and resource managers. • This level of load balancing involves partitioning of jobs, identifying the resources, and queuing of the jobs
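The partition-and-queue step described above can be sketched as a simple round-robin split of a job's tasks across the identified resources. Round-robin is a deliberately naive policy chosen for illustration; real balancers account for load and capacity.

```python
from collections import deque

def balance(tasks, resources):
    # One queue per resource; tasks are dealt out round-robin.
    queues = {name: deque() for name in resources}
    for i, task in enumerate(tasks):
        queues[resources[i % len(resources)]].append(task)
    return queues
```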
the grid resources. For example, capabilities for Grid Computing resource authentication, remote resource access, scheduling, and monitoring of status information • Integrated solutions are a combination of existing advanced middleware and application functionalities, combined to provide more coherent and higher-performance results
sharing computing power. • Members join OurGrid by downloading a lightweight client which runs tasks on their computer. • The tasks may be part of an application submitted by any OurGrid member. • OurGrid members cannot choose which application their spare computing power is donated to • The client code runs the tasks in a sandbox, which isolates the tasks from the rest of the computer. • It is designed to work for up to 10 000
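A very stripped-down illustration of the sandbox idea: run the task in a separate process so it cannot touch the parent's state, and kill it if it exceeds a time limit. Real OurGrid-style sandboxing uses much stronger isolation (e.g. virtual machines); this sketch only shows the "isolate the task from the rest of the computer" structure.

```python
import subprocess
import sys

def run_sandboxed(task_code, timeout=5):
    # Execute untrusted task code in a child interpreter: it shares no
    # variables with this process and is killed if it exceeds 'timeout'.
    proc = subprocess.run(
        [sys.executable, "-c", task_code],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout.strip()
```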
a high-end desktop in place of a server? • Do I need more than one server? • How much will a server cost? • Will I have to replace it in six months? • How much memory and disk space will it need? • How do you know when you need a server? • How is a server different from a desktop?
computer had similar processor speed, memory, and storage capacity compared to a server, it still isn't a replacement for a real server. The technologies behind them are engineered for different purposes ◦ A desktop computer system typically runs a user-friendly operating system and desktop applications to facilitate desktop-oriented tasks. ◦ A server manages all network resources. Servers are often dedicated (meaning they perform no other tasks besides server tasks).
you will need to make include the following: 1. Form Factor: For small businesses, the best choice is a dedicated entry-level server in a tower configuration. 2. Processor: Choose a server-specific processor to boost performance and data throughput. 3. Memory: Buy as much memory as you can afford and look for expansion slots for future upgrades. 4. Storage: Look for SATA or SCSI hard disks, not IDE.
high-end PC can often function as a server in a pinch, especially for certain roles such as file serving, there are several reasons a dedicated server makes a better long-term investment. Some of a dedicated server's key advantages over a high-end PC include: • Reliability and Performance • Scalability • Security • Long-term Cost Savings Redundancy is particularly important for storage, where RAID (redundant array of inexpensive disks) is typically used to keep a server up and running.
Scalability: While a high-end personal computer may meet the existing needs of a smaller business, there's a high probability that it won't be able to keep up as the company expands and its network needs increase. ◦ Server Security: Security can be implemented more efficiently and effectively with servers. These advantages include increased reliability and performance, scalability, security, reduced administration, and lower total cost of ownership.
ensure you purchase a server with enough power. The number of servers required depends on how much server processing power you need to support the number of users and the applications you run. • To ensure your server meets your business needs, you must first understand server processing power and know exactly how you want to use your server. • There are many different types of servers, such as file, print, or database servers. These are the most common types of servers a small business will need to invest in.
Server: At a certain point, it makes sense to relieve the burden on individual users' computers by having a dedicated server that is capable of providing file storage and print services –
devices to exist on their own separate network and communicate directly with each other over very fast media • Storage Area Networks (SANs) are most commonly implemented using a technology called Fibre Channel. • Fibre Channel is a set of communication standards that supports very fast data rates. • Devices on the Storage Area Network are normally connected together through a special kind of switch, called a Fibre Channel switch, that acts as a connectivity point for the devices.
a high-speed subnetwork of shared storage devices. • A SAN's architecture works in a way that makes all storage devices available to all servers on a LAN or WAN. • A Storage Area Network can be anything from two servers on a network accessing a central pool of storage devices to several thousand servers accessing many millions of megabytes of storage.
costs are: • Cost of hardware • Cost of server operating system and applications • Cost to administer Virtual servers are servers that are hosted online and managed by a hosting company. The ongoing costs associated with hosted servers are not exactly inexpensive either, but that's a topic for another time.
and scalability are two of its biggest selling points, but the downside is that this makes it extremely difficult to give a simple, precise answer to the question of how much memory and storage space will be needed when implementing a new server. Each case needs to be researched based on a number of factors, most notably: • How will the server be used? • How many users will the server need to accommodate, both now and in the future? • The types of demands users will be placing on the server, both now and in the future
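As a worked illustration of that sizing research, a back-of-the-envelope estimate can combine a base footprint for the OS and services, a per-user allowance, and headroom for future growth. Every figure below is an illustrative assumption, not a vendor recommendation.

```python
def estimate_memory_mb(users, mb_per_user=50, base_mb=2048, headroom=1.5):
    # Base OS/services footprint plus a per-user allowance,
    # padded with headroom for growth. All defaults are assumptions.
    return int((base_mb + users * mb_per_user) * headroom)
```

For 40 users with these assumed figures this suggests roughly 6 GB of RAM; the real numbers depend entirely on the applications measured in your own environment.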
systems is designed to protect the user and the host system from a potentially malicious third party • Authentication identifies each entity and ensures that no third party is involved • Authorization ensures that the user is allowed to use the remote Grid resources • Automation is used to implement many of the design goals of Grid computing, such as single sign-on, virtual organizations, and interaction among multiple administrative domains.
provider to keep his data and activities secret and not to interfere; the resource provider, in turn, trusts the user not to act maliciously • Mutual authentication ensures that the source of any request is really the user, and that the user is issuing requests to the correct resource provider • The security of the mutual authentication process depends on the secrecy of the private keys belonging to the user, the resource, and any certificate authorities.
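The challenge-response core of mutual authentication can be illustrated with a shared-secret HMAC stand-in. Real Grid middleware uses X.509 certificates and public-key signatures instead of shared secrets; this sketch only shows the "each side proves itself to the other" structure.

```python
import hashlib
import hmac
import os

def respond(key, challenge):
    # Prove knowledge of 'key' without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def mutual_auth(user_key, provider_key):
    # Each side issues a fresh random challenge to the other and
    # accepts only if the peer's response matches what the key it
    # trusts would have produced.
    c_to_provider, c_to_user = os.urandom(16), os.urandom(16)
    provider_response = respond(provider_key, c_to_provider)
    user_response = respond(user_key, c_to_user)
    user_accepts = hmac.compare_digest(provider_response,
                                       respond(user_key, c_to_provider))
    provider_accepts = hmac.compare_digest(user_response,
                                           respond(provider_key, c_to_user))
    return user_accepts and provider_accepts
```

Authentication succeeds only when both keys match; a mismatch on either side (an impostor user or an impostor provider) fails the handshake.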
of partial trust is when the user trusts the resource provider, but the resource provider does not necessarily trust the user. • Usually the local user accounts are appropriately restricted by the operating system, which makes the operating system the first line of defense. • The owner of the system may want to restrict Grid users further than other local users on the host. • The controlled shell can be designed with a fail-safe mechanism such that, if a potential intrusion or modification is detected, the user is locked out
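The controlled-shell idea reduces to a fail-safe allowlist check before anything is executed: everything not explicitly authorized is refused. The command set below is purely illustrative.

```python
import shlex

AUTHORIZED = {"ls", "cat", "whoami"}  # illustrative set of allowed commands

def controlled_shell(command_line):
    # Fail-safe: empty, unparsable, or unlisted commands are all denied.
    try:
        parts = shlex.split(command_line)
    except ValueError:
        return "DENIED"
    if not parts or parts[0] not in AUTHORIZED:
        return "DENIED"
    return "ALLOWED"  # a real controlled shell would exec the command here
```

Note the fail-safe default: the shell rejects anything it does not positively recognize, matching the lock-out behavior described above.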
is when the user may also distrust the resource provider. This scenario can arise in many ways: • The owners of the resource may themselves be malicious and intentionally use their access to the host to violate the protection the external user expects. • The user may desire some form of confirmation that his data has not been compromised, as well as confirmation that it has been deleted with no copies kept after the job is completed.
solve Grand Challenge problems such as: • Earthquake simulation • Climate/weather modeling • Financial modeling • There is a well-known project called distributed.net, which was started in 1997 and has run a number of successful projects in its history. • Another well-known project is the World Community Grid, whose mission is to create the largest public computing grid that benefits humanity.
Intelligence) @Home project, in which PC users worldwide donate unused processor cycles to help the search for signs of extraterrestrial life by analyzing signals coming from outer space. The project relies on individual users volunteering to let the project harness the unused processing power of their computers. This method saves the project both money and resources. • The Folding@home project is administered by the Pande Group. Its research includes the way proteins take certain shapes, called folds, and how that relates to what proteins do