2. PARALLEL COMPUTING :
Parallel computing refers to the process of breaking down larger
problems into smaller, independent, often similar parts that can be
executed simultaneously by multiple processors communicating
via shared memory, the results of which are combined upon
completion as part of an overall algorithm.
The primary goal of parallel computing is to increase available
computation power for faster application processing and problem
solving.
There are generally four types of parallel computing, available
from both proprietary and open source parallel computing vendors:
bit-level parallelism, instruction-level parallelism, task
parallelism, and superword-level parallelism.
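As a minimal sketch of task parallelism, the following uses Python's standard multiprocessing module (the function names are illustrative): the problem is broken into independent parts, executed simultaneously by a pool of worker processes, and the results are combined on completion.

```python
from multiprocessing import Pool

def square(n):
    # One independent unit of work; it shares no state with other tasks.
    return n * n

def parallel_squares(numbers, workers=4):
    # Distribute the tasks across worker processes and gather the results.
    with Pool(processes=workers) as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    print(parallel_squares(range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Each call to `square` runs in a separate process, which is why the work must be independent; the pool handles splitting the input and reassembling the output in order.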
3. DISTRIBUTED COMPUTING :
Distributed computing is a model in which components of a software system
are shared among multiple computers.
Even though the components are spread out across multiple computers, they are
run as one system.
This is done in order to improve efficiency and performance.
In a narrow form, distributed computing is limited to programs with
components shared among computers within a limited geographic area. Broader
definitions, however, include shared tasks as well as program components.
In the broadest sense of the term, distributed computing just means that
something is shared among multiple systems, which may also be in different
locations.
Building and operating distributed systems may also require substantial
tooling (for deployment, monitoring, and debugging) as well as
coordination and communication skills across teams.
4. DIFFERENCE BETWEEN PARALLEL
AND DISTRIBUTED COMPUTING :
Parallel Computing:
- In parallel computing, multiple processors perform multiple tasks
assigned to them simultaneously.
- Memory in parallel systems can either be shared or distributed.
- Parallel computing provides concurrency and saves time and money.
Distributed Computing:
- In distributed computing we have multiple autonomous computers
which appear to the user as a single system.
- In distributed systems there is no shared memory, and computers
communicate with each other through message passing.
- In distributed computing a single task is divided among different
computers.
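The message-passing style described above can be sketched with two local processes and queues standing in for the network (all names here are illustrative): the workers share no memory and cooperate only by exchanging messages, each handling part of a single overall task.

```python
from multiprocessing import Process, Queue

def worker(task_queue, result_queue):
    # Receive work as messages; send partial results back as messages.
    while True:
        chunk = task_queue.get()
        if chunk is None:          # sentinel message: no more work
            break
        result_queue.put(sum(chunk))

def distributed_sum(numbers, n_workers=2):
    task_queue, result_queue = Queue(), Queue()
    workers = [Process(target=worker, args=(task_queue, result_queue))
               for _ in range(n_workers)]
    for p in workers:
        p.start()
    # Divide the single task (summing a list) among the workers.
    half = len(numbers) // 2
    for chunk in (numbers[:half], numbers[half:]):
        task_queue.put(chunk)
    for _ in workers:
        task_queue.put(None)       # one stop message per worker
    total = result_queue.get() + result_queue.get()
    for p in workers:
        p.join()
    return total

if __name__ == "__main__":
    print(distributed_sum(list(range(101))))  # 5050
```

In a real distributed system the queues would be replaced by network messages between autonomous machines, but the key property is the same: no shared memory, only message passing.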
6. STEPS INVOLVED IN THE CONVERSION OF
DECIMAL TO BINARY :
Divide the number by 2.
Get the integer quotient for the next iteration.
Get the remainder for the binary digit.
Repeat the steps until the quotient is equal to 0.
The binary result is the sequence of remainders read from last to first.
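The steps above translate directly into a short routine (a sketch assuming a non-negative integer input): repeatedly divide by 2, record each remainder, and read the remainders from last to first.

```python
def decimal_to_binary(n):
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder gives the next binary digit
        n //= 2                   # integer quotient for the next iteration
    return "".join(reversed(bits))  # read remainders from last to first

print(decimal_to_binary(13))  # 1101
```

For example, 13 → quotients 6, 3, 1, 0 with remainders 1, 0, 1, 1, which read in reverse give 1101.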