PRESENTED BY
TAMILARASAN. S
PARALLEL AND DISTRIBUTED
COMPUTING
PARALLEL COMPUTING :
 Parallel computing is the process of breaking a large problem into
smaller, independent, often similar parts that can be executed
simultaneously by multiple processors communicating via shared
memory; the results are combined upon completion as part of an
overall algorithm.
 The primary goal of parallel computing is to increase the available
computation power for faster application processing and problem
solving.
 There are generally four types of parallel computing, available
from both proprietary and open-source parallel computing vendors:
bit-level parallelism, instruction-level parallelism, task
parallelism, and superword-level parallelism.
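The idea in the first bullet can be illustrated with a minimal Python sketch (not from the original slides; the function names are illustrative): a problem is split into independent chunks, the chunks are processed concurrently by worker threads, and the partial results are combined at the end.

```python
# A minimal sketch of task parallelism: split a large problem into
# independent chunks, process them concurrently, combine the results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent piece of the problem.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(numbers, workers=4):
    # Break the input into roughly equal chunks, one per worker.
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Workers run concurrently; results are combined upon completion.
        return sum(pool.map(partial_sum, chunks))
```

Here the workers share the process's memory, matching the shared-memory communication described above.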
DISTRIBUTED COMPUTING :
 Distributed computing is a model in which components of a software
system are shared among multiple computers.
 Even though the components are spread out across multiple computers,
they run as one system.
 This is done to improve efficiency and performance.
 In its narrow form, distributed computing is limited to programs with
components shared among computers within a limited geographic area.
Broader definitions, however, include shared tasks as well as program
components.
 In the broadest sense of the term, distributed computing simply means
that something is shared among multiple systems, which may also be in
different locations.
 Distributed computing may also require substantial tooling and soft
skills.
DIFFERENCE BETWEEN PARALLEL
AND DISTRIBUTED COMPUTING :
Parallel Computing:
 In parallel computing, multiple processors perform multiple tasks
assigned to them simultaneously.
 Memory in parallel systems can be either shared or distributed.
 Parallel computing provides concurrency and saves time and money.
Distributed Computing:
 In distributed computing, we have multiple autonomous computers
that appear to the user as a single system.
 In distributed systems there is no shared memory; computers
communicate with each other through message passing.
 In distributed computing, a single task is divided among different
computers.
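The message-passing point in the comparison can be sketched in Python with separate processes (an illustrative example, not from the original slides): the worker process has its own address space and exchanges data with the parent only through explicit messages.

```python
# A minimal sketch of message passing: two processes share no memory
# and exchange data only through explicit messages (queues here).
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # The worker has its own address space; it only sees what is sent.
    task = inbox.get()        # receive a message
    outbox.put(task * 2)      # reply with a message

def round_trip(value):
    # Start a separate process and communicate purely via messages.
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(value)
    result = outbox.get()
    p.join()
    return result
```

In a real distributed system the two sides would run on different machines and the queues would be replaced by network messages, but the pattern is the same.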
FLOWCHART VIEW OF PARALLEL AND DISTRIBUTED
COMPUTING
STEPS INVOLVED IN THE CONVERSION OF
DECIMAL TO BINARY :
 Divide the number by 2.
 Get the integer quotient for the next iteration.
 Get the remainder for the binary digit.
 Repeat the steps until the quotient is equal to 0.
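The steps above can be sketched directly in Python (an illustrative function, not from the original slides): divide by 2, keep the remainder as a binary digit, and repeat with the quotient until it reaches 0.

```python
def decimal_to_binary(n):
    # Repeat: divide by 2, keep the remainder as the next binary digit,
    # and carry the integer quotient into the next iteration.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder -> next binary digit
        n //= 2                    # integer quotient for next iteration
    # Remainders come out least-significant first, so reverse them.
    return "".join(reversed(digits))
```

For example, 13 yields remainders 1, 0, 1, 1, which read in reverse give the binary string "1101".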
THANK YOU
