This document provides an overview of augmented reality (AR), including definitions, comparisons to virtual reality, how AR works, and examples of applications. It defines AR as enhancing the real-world environment with computer-generated input like graphics or sound. Key points made include that AR combines real and virtual elements in real-time, requires devices like displays and sensors, uses image registration to align real and virtual images, and has applications in medicine, military, education and games. Challenges discussed are performance issues, interaction difficulties, and alignment problems.
PowerPoint presentation on Sixth Sense technology, by Jawhar Ali
The document discusses the Sixth Sense technology, which aims to connect the physical and digital world without hardware devices through an additional "sixth sense". It provides a brief history, outlines the key components including a camera and projector, and describes how the technology works by recognizing gestures with computer vision techniques. A range of applications are presented, from drawing and mapping to getting flight information. Related technologies like augmented reality, gesture recognition, and computer vision are also discussed. Finally, advantages like portability and connecting the real/digital world are highlighted, alongside disadvantages such as battery life.
Gestures are an important form of non-verbal communication between humans and can also be used to create interfaces between humans and machines. There are several types of gestures including emblems, sign languages, gesticulation and pantomimes. Gesture recognition allows humans to interact with computers through motions of the body, especially hand movements. Some methods of gesture recognition include device-based techniques using sensors on gloves, vision-based techniques using cameras, and controller-based techniques using motion controllers. Gesture recognition has applications in areas such as virtual controllers, sign language translation, game interaction and robotic assistance.
Gesture based computing uses gestures as a form of human-computer interaction. It can be used to replace mice and keyboards by allowing users to navigate interfaces and interact with 3D environments through gestures detected by cameras. Common technologies for gesture recognition include depth cameras, controllers, and single visible light cameras. Gestures can be used for applications in entertainment, gaming, communications for disabled individuals, and as an alternative computer interface.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2014-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Francis MacDougall, Senior Director of Technology at Qualcomm, presents the "Vision-Based Gesture User Interfaces" tutorial at the May 2014 Embedded Vision Summit.
The means by which we interact with the machines around us is undergoing a fundamental transformation. While we may still sometimes need to push buttons, touch displays and trackpads, and raise our voices, we’ll increasingly be able to interact with and control our devices simply by signaling with our fingers, gesturing with our hands, and moving our bodies.
This presentation explains how gestures fit into the spectrum of advanced user interface options, compares and contrasts the various 2-D and 3-D technologies (vision and other) available to implement gesture interfaces, gives examples of the various gestures (and means of discerning them) currently in use by systems manufacturers, and forecasts how the gesture interface market may evolve in the future.
Sixth Sense is a wearable gestural interface developed by Pranav Mistry that augments the physical world with digital information. It uses a camera to track hand gestures which are interpreted by a mobile device and used to interact with information projected onto surfaces. The system recognizes objects and displays related data, allowing users to access information about their environment through natural hand motions. It has applications like accessing product details, taking pictures, and viewing maps. The prototype demonstrates the potential of Sixth Sense to serve as a transparent user interface for ubiquitous access to online information using gestures.
The document discusses gesture recognition technology. It describes how cameras can read human body movements and communicate that data to computers to interpret gestures. Gestures can be used as inputs to control devices or applications. The document outlines different types of gestures, image processing techniques used, input devices like gloves and cameras, challenges, and potential uses like sign language recognition and immersive gaming.
This document summarizes a presentation on gesture recognition technology. It discusses the introduction of gesture recognition, the types of gestures, uses of gesture recognition including sign language recognition and virtual controllers. It also discusses input devices such as wired gloves and depth cameras. The document outlines algorithms for gesture recognition including 3D model-based, skeletal-based, and appearance-based algorithms. It concludes with discussion of challenges for gesture recognition including limitations of equipment and variations in recognition accuracy.
The document discusses Sixth Sense, a wearable gestural interface developed by Pranav Mistry. It consists of a camera, projector, and mirror coupled in a pendant-like device. The camera tracks hand gestures to access digital information, which is processed on a smartphone and projected back using the mirror. Some applications include making calls, taking pictures, checking the time, and accessing flight updates using natural hand gestures. In conclusion, Sixth Sense allows accessing digital information about the environment automatically and interacting with it using gestures in a transparent interface.
This document discusses gesture recognition. It defines a gesture as a form of non-verbal communication using bodily movements. The document then provides examples of gestures and discusses how gesture recognition works by using computer vision and image processing techniques. It outlines different types of gestures including hand gestures, sign language, and gestures detected using electrical fields. The document discusses advantages such as more natural human-computer interaction and disadvantages including issues with ambient light and object detection. It concludes by discussing future trends in gesture recognition technology.
Gesture recognition is a topic in computer science and language technology that interprets human gestures via mathematical algorithms.
Gestures can originate from any bodily motion or state, but commonly originate from the face or hands.
Gesture recognition enables humans to communicate with machines (human-machine interaction, HMI) and interact naturally without any mechanical devices.
The document discusses hand gesture recognition. It defines what gestures are and how gesture recognition works by interpreting human gestures through mathematical algorithms. This allows humans to interact with machines naturally without devices. Examples of applications include controlling a smart TV with hand movements and using gestures for gaming. The document outlines the hardware and software needed for gesture recognition, including a webcam, processor, RAM, and operating system. It also provides an overview of the module structure involved in identifying and applying gestures as inputs.
Sixth Sense is a recent technology: a wearable gestural interface that augments the physical world around us with digital information.
This document provides an introduction and overview of hand gesture recognition. It discusses what gestures are, how gesture recognition works to interpret human body language and enable natural human-computer interaction. It outlines the key modules involved, including image transformation techniques like frame extraction, blurring and color thresholding. Example hand gestures and applications are shown, along with the overall data flow and required hardware and software components.
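The image-transformation steps named above (blurring and color thresholding applied to extracted frames) can be sketched with plain NumPy. The kernel size and the RGB threshold band below are illustrative assumptions, not values from the original slides:

```python
import numpy as np

def box_blur(gray, k=3):
    """Mean-filter a 2D grayscale array with a k x k box kernel (edges padded)."""
    pad = k // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)

def color_threshold(rgb, lo, hi):
    """Binary mask of pixels whose RGB values fall inside [lo, hi] per channel."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((rgb >= lo) & (rgb <= hi), axis=-1)

# Toy frame: a 4x4 image with a 2x2 "skin-coloured" patch.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = (200, 150, 120)
mask = color_threshold(frame, (150, 100, 80), (255, 200, 160))
print(mask.sum())  # 4 pixels fall inside the threshold band
```

In a real pipeline the thresholded mask, not the raw frame, is what the gesture-identification module consumes.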
Gesture recognition technology uses cameras to read human body movements and gestures as a form of input to control devices and applications. A camera captures gestures like hand movements and facial expressions and sends that data to a computer for interpretation. Gesture recognition allows humans to interact with machines naturally without physical devices by using gestures to control cursors, activate menus, or control games and other applications. There are different methods for capturing and interpreting gestures including using wired gloves, depth cameras, stereo cameras, single cameras, or motion controllers.
Augmented reality (AR) combines the real world with virtual elements. AR works by tracking a user's position, computing virtual objects, and displaying them overlaid on the real world through a head-mounted display. The key components of AR are head-mounted displays, tracking systems, and mobile computing power. AR has applications in medicine for surgical planning, education for enhancing lessons, the military for enhancing situational awareness, games for blending virtual and real worlds, and maintenance/construction for overlaying instructions. While tracking and size limitations currently exist, AR has huge potential to blur real and virtual experiences.
The recent advent of novel sensing and display technologies has encouraged the development of a variety of multi-touch and gesture based interactive systems.
In these systems, users may interact directly with information using touch and natural hand gestures.
Unfortunately, most gestural and multi-touch interactive systems are not mobile, and small mobile devices fail to provide the intuitive experience of full-sized gestural systems.
We present VSP (Virtual Smart Phone), a multi-touch and gesture-based interaction system.
The document describes a virtual smart phone (VSP) system that uses augmented reality and gesture recognition to allow users to interact with digital information and devices without needing a physical phone. The VSP system superimposes digital layers of information onto the real world using projectors and sensors. It enables hands-free calling and data transfer between users and devices through authentication methods like palm recognition. The VSP aims to make communication and interaction with digital technologies more natural and intuitive.
IRJET: Enhanced Look-Based Media Player with Hand Gesture Recognition (IRJET Journal)
The document describes a proposed enhanced media player that uses face detection and hand gesture recognition to control playback. Specifically, it will:
1. Continuously monitor the user's face using a webcam and only play the video when the user is looking at the screen, pausing otherwise.
2. Detect hand gestures, such as raising a hand, to increase or decrease the volume, or to switch to the next or previous video.
3. The system is intended to provide a better media playback experience by automating control and preventing the user from missing parts of a video if they look away. Both face detection and hand gesture recognition are implemented using computer vision algorithms like Haar cascades.
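Haar cascades owe their speed to evaluating rectangular features in constant time over an integral image. A minimal sketch of that core primitive (the feature only, not the full boosted cascade; the test image is made up):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half (w even)."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.ones((6, 6))
img[:, 3:] = 0          # bright left half, dark right half
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 6, 6))  # strong response on the vertical edge: 18.0
```

A full detector chains many such features into a cascade of boosted classifiers, rejecting most windows after only a few feature evaluations.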
Gesture recognition technology allows for control of devices through hand and body motions. It works by using cameras, sensors and algorithms to interpret gestures and movements. Key applications include controlling smart TVs with hand motions, sign language translation, and assisting disabled individuals. Challenges include variations between individuals, reading motions accurately due to lighting and noise, and lack of standardized gesture languages.
Gesture recognition allows humans to interface with computers using bodily movements, especially hand gestures. The system first acquires an image, preprocesses it through steps like segmentation and filtering, then extracts features using edge detection. It matches the extracted features to a database of signatures for known gestures. The system was tested on 25 basic American sign language gestures and achieved 98.6% accuracy in recognizing 493 out of 500 gestures. Challenges include inconsistent lighting and background noise.
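The edge-detection step of such a pipeline is commonly done with Sobel kernels. A small NumPy sketch, using an illustrative step-edge image rather than real camera data:

```python
import numpy as np

def sobel_edges(gray):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            patch = gray[dy:dy + h - 2, dx:dx + w - 2]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy)

# A vertical step edge yields a strong horizontal gradient at the boundary.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edges = sobel_edges(img)
print(edges.max())
```

The resulting edge map would then feed the feature-matching stage described above.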
This document discusses gesture recognition. It begins by introducing gesture recognition and its evolution from graphical user interfaces using mice and keyboards. It then defines different types of gestures including iconic, deictic, metaphoric, and beat gestures. The document outlines the basic working of a gesture recognition system and different types of gesture sensing technologies like hand gesture recognition, facial gesture recognition, sign language recognition, and vision-based techniques. It discusses input devices used for gesture tracking and various applications of gesture recognition like socially assistive robotics, sign language translation, virtual controllers, and remote control. Finally, it addresses challenges in gesture recognition like lack of a universal gesture language and issues with robustness.
SixthSense is a wearable gesture-based device developed by Pranav Mistry that allows users to interact with information projected onto physical surfaces using natural hand gestures. It consists of a pocket projector, webcam, and mirror worn around the neck. The webcam tracks color markers on the user's fingers to recognize gestures and interact with projected interfaces for applications like making calls, viewing maps, checking the time, and obtaining information about books, products, and flights by scanning them. The system uses computer vision algorithms and augmented reality to seamlessly integrate digital information into the physical world in an intuitive way.
Hand gesture recognition system for human-computer interaction using contour ... (eSAT Journals)
This document describes a hand gesture recognition system that allows users to control computer operations using hand gestures captured by a webcam. The system involves four main phases: 1) image acquisition using a webcam, 2) image pre-processing to extract the hand and reduce noise, 3) feature extraction by detecting hand contours, and 4) gesture recognition by comparing contour features to stored templates and assigning computer commands. The system was able to recognize various gestures like opening programs or pressing keys with an average recognition rate of 95%. Future work could involve reducing constraints on the user environment and allowing both hands to perform more operations.
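The contour-comparison idea can be illustrated with a toy version: extract the boundary of a binary hand mask and match a crude (area, perimeter) signature against stored templates. The descriptor and template values here are illustrative stand-ins for the paper's actual contour features:

```python
import numpy as np

def boundary_mask(mask):
    """Pixels of a binary mask with at least one 4-neighbour outside the shape."""
    padded = np.pad(mask, 1)
    inside = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
              padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~inside

def shape_descriptor(mask):
    """(area, boundary-pixel count): a crude, scale-sensitive contour signature."""
    return mask.sum(), boundary_mask(mask).sum()

def classify(mask, templates):
    """Return the template label whose descriptor is nearest in Euclidean distance."""
    d = np.array(shape_descriptor(mask), dtype=float)
    return min(templates, key=lambda name: np.linalg.norm(d - templates[name]))

# Hypothetical gesture templates: (area, boundary length) pairs.
templates = {"fist": np.array([9.0, 8.0]), "open_hand": np.array([25.0, 24.0])}

fist = np.zeros((7, 7), dtype=bool)
fist[2:5, 2:5] = True       # 3x3 blob: area 9, boundary 8
print(classify(fist, templates))  # fist
```

Real systems use richer, scale- and rotation-invariant contour features, but the template-matching structure is the same.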
The document describes SixthSense, a wearable gesture-based device that allows users to interact with information projected onto physical surfaces using natural hand gestures. The SixthSense prototype consists of a pocket projector, mirror, and webcam hung around the user's neck. It uses computer vision techniques to track colored markers on the user's fingers to recognize gestures and project augmented information about objects, maps, time, etc. onto nearby surfaces. The system aims to seamlessly integrate digital information into everyday physical environments and interactions through an open source, cost-effective, and user-friendly gesture interface.
Gesture recognition technology uses mathematical algorithms to interpret human gestures and enable interaction with machines without physical devices. It has various applications including sign language recognition, interpreting facial expressions, and electrical field sensing of body proximity. Vision-based and device-based techniques use cameras, gloves, or other sensors to detect gestures. Challenges include varying lighting and background items that can reduce accuracy. The future potential is vast across entertainment, home automation, education, medicine and security.
Hand gesture analysis approaches can be categorized as either glove-based or vision-based. Glove-based approaches use sensors attached to a glove to detect finger flexion, while vision-based approaches use cameras and 3D modeling to analyze images of the hand. There are challenges to both approaches related to sensor accuracy, viewpoint changes, and processing requirements. Existing literature analyzes taxonomy of gestures and different algorithms for gesture recognition including HMM, KNN, and SVM. Applications of gesture recognition include virtual controllers, sign language translation, and assistive robotics. Key difficulties are handling variability in appearances and balancing performance, cost, and user independence.
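Of the classifiers listed (HMM, KNN, SVM), KNN is the simplest to sketch: label a gesture's feature vector by majority vote among its nearest training examples. The 2-D features and labels below are hypothetical:

```python
import math
from collections import Counter

def knn_classify(sample, training, k=3):
    """Majority vote among the k training vectors nearest to the sample."""
    by_distance = sorted(training, key=lambda item: math.dist(sample, item[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D gesture features, e.g. (fingertip count, mean contour curvature).
training = [
    ((1.0, 0.2), "point"), ((1.1, 0.25), "point"), ((0.9, 0.3), "point"),
    ((5.0, 0.9), "open_hand"), ((4.8, 0.85), "open_hand"), ((5.2, 0.95), "open_hand"),
]
print(knn_classify((1.05, 0.22), training))  # point
```

HMMs add temporal modeling for dynamic gestures, and SVMs trade KNN's lazy lookup for a trained decision boundary, but all three consume feature vectors like these.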
The document discusses augmented reality, including its definition as a live view of the physical real-world environment that is augmented by computer-generated perceptual information. It provides details on the history of augmented reality, how augmented reality systems work, examples of applications in various fields such as military, medical, education and entertainment, and the future potential of augmented reality. Limitations including tracking accuracy and reliance on mobile devices are also noted.
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are "augmented" by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data.[1] It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.
This document discusses gesture recognition. It defines a gesture as a form of non-verbal communication using bodily movements. The document then provides examples of gestures and discusses how gesture recognition works by using computer vision and image processing techniques. It outlines different types of gestures including hand gestures, sign language, and gestures detected using electrical fields. The document discusses advantages such as more natural human-computer interaction and disadvantages including issues with ambient light and object detection. It concludes by discussing future trends in gesture recognition technology.
Gesture recognition is a topic in computer science and language technology which interpret human gestures via mathematical algorithms.
Gestures can originate from any bodily motion or state but commonly originate from the face or hand.
Gesture recognition enables humans to communicate with the machine (HMI) and interact naturally without any mechanical devices.
The document discusses hand gesture recognition. It defines what gestures are and how gesture recognition works by interpreting human gestures through mathematical algorithms. This allows humans to interact with machines naturally without devices. Examples of applications include controlling a smart TV with hand movements and using gestures for gaming. The document outlines the hardware and software needed for gesture recognition, including a webcam, processor, RAM, and operating system. It also provides an overview of the module structure involved in identifying and applying gestures as inputs.
The Sixth Sense is the Basic Latest Technology. It is the a wearable gestural interface that augments the physical world around us with digital information
This document provides an introduction and overview of hand gesture recognition. It discusses what gestures are, how gesture recognition works to interpret human body language and enable natural human-computer interaction. It outlines the key modules involved, including image transformation techniques like frame extraction, blurring and color thresholding. Example hand gestures and applications are shown, along with the overall data flow and required hardware and software components.
Gesture recognition technology uses cameras to read human body movements and gestures as a form of input to control devices and applications. A camera captures gestures like hand movements and facial expressions and sends that data to a computer for interpretation. Gesture recognition allows humans to interact with machines naturally without physical devices by using gestures to control cursors, activate menus, or control games and other applications. There are different methods for capturing and interpreting gestures including using wired gloves, depth cameras, stereo cameras, single cameras, or motion controllers.
Augmented reality (AR) combines the real world with virtual elements. AR works by tracking a user's position, computing virtual objects, and displaying them overlaid on the real world through a head-mounted display. The key components of AR are head-mounted displays, tracking systems, and mobile computing power. AR has applications in medicine for surgical planning, education for enhancing lessons, the military for enhancing situational awareness, games for blending virtual and real worlds, and maintenance/construction for overlaying instructions. While tracking and size limitations currently exist, AR has huge potential to blur real and virtual experiences.
The recent advent of novel sensing and display technologies has encouraged the development of a variety of multi-touch and gesture based interactive systems.
In these systems user may interact directly with information using touch add natural hand gestures.
Unfortunately, most gestural and multi-touch based interactive systems are not mobile and small mobile devices fail to provide the intuitive experience of full-sized gestural systems.
We present VSP-Virtual Smart Phone, a multi-touch and gesture based interaction system
The document describes a virtual smart phone (VSP) system that uses augmented reality and gesture recognition to allow users to interact with digital information and devices without needing a physical phone. The VSP system superimposes digital layers of information onto the real world using projectors and sensors. It enables hands-free calling and data transfer between users and devices through authentication methods like palm recognition. The VSP aims to make communication and interaction with digital technologies more natural and intuitive.
IRJET- Enhanced Look Based Media Player with Hand Gesture RecognitionIRJET Journal
The document describes a proposed enhanced media player that uses face detection and hand gesture recognition to control playback. Specifically, it will:
1. Continuously monitor the user's face using a webcam and only play the video when the user is looking at the screen, pausing otherwise.
2. Detect hand gestures like raising a hand to increase volume, decrease volume, switch to the next video, or previous video.
3. The system is intended to provide a better media playback experience by automating control and preventing the user from missing parts of a video if they look away. Both face detection and hand gesture recognition are implemented using computer vision algorithms like HAAR cascades.
Gesture recognition technology allows for control of devices through hand and body motions. It works by using cameras, sensors and algorithms to interpret gestures and movements. Key applications include controlling smart TVs with hand motions, sign language translation, and assisting disabled individuals. Challenges include variations between individuals, reading motions accurately due to lighting and noise, and lack of standardized gesture languages.
Gesture recognition allows humans to interface with computers using bodily movements, especially hand gestures. The system first acquires an image, preprocesses it through steps like segmentation and filtering, then extracts features using edge detection. It matches the extracted features to a database of signatures for known gestures. The system was tested on 25 basic American sign language gestures and achieved 98.6% accuracy in recognizing 493 out of 500 gestures. Challenges include inconsistent lighting and background noise.
This document discusses gesture recognition. It begins by introducing gesture recognition and its evolution from graphical user interfaces using mice and keyboards. It then defines different types of gestures including iconic, deictic, metaphoric, and beat gestures. The document outlines the basic working of a gesture recognition system and different types of gesture sensing technologies like hand gesture recognition, facial gesture recognition, sign language recognition, and vision-based techniques. It discusses input devices used for gesture tracking and various applications of gesture recognition like socially assistive robotics, sign language translation, virtual controllers, and remote control. Finally, it addresses challenges in gesture recognition like lack of a universal gesture language and issues with robustness.
SixthSense is a wearable gesture-based device developed by Pranav Mistry that allows users to interact with information projected onto physical surfaces using natural hand gestures. It consists of a pocket projector, webcam, and mirror worn around the neck. The webcam tracks color markers on the user's fingers to recognize gestures and interact with projected interfaces for applications like making calls, viewing maps, checking the time, and obtaining information about books, products, and flights by scanning them. The system uses computer vision algorithms and augmented reality to seamlessly integrate digital information into the physical world in an intuitive way.
Hand gesture recognition system for human computer interaction using contour ...eSAT Journals
This document describes a hand gesture recognition system that allows users to control computer operations using hand gestures captured by a webcam. The system involves four main phases: 1) image acquisition using a webcam, 2) image pre-processing to extract the hand and reduce noise, 3) feature extraction by detecting hand contours, and 4) gesture recognition by comparing contour features to stored templates and assigning computer commands. The system was able to recognize various gestures like opening programs or pressing keys with an average recognition rate of 95%. Future work could involve reducing constraints on the user environment and allowing both hands to perform more operations.
The document describes SixthSense, a wearable gesture-based device that allows users to interact with information projected onto physical surfaces using natural hand gestures. The SixthSense prototype consists of a pocket projector, mirror, and webcam hung around the user's neck. It uses computer vision techniques to track colored markers on the user's fingers to recognize gestures and project augmented information about objects, maps, time, etc. onto nearby surfaces. The system aims to seamlessly integrate digital information into everyday physical environments and interactions through an open source, cost-effective, and user-friendly gesture interface.
Gesture recognition technology uses mathematical algorithms to interpret human gestures and enable interaction with machines without physical devices. It has various applications including sign language recognition, interpreting facial expressions, and electrical field sensing of body proximity. Vision-based and device-based techniques use cameras, gloves, or other sensors to detect gestures. Challenges include varying lighting and background items that can reduce accuracy. The future potential is vast across entertainment, home automation, education, medicine and security.
Hand gesture analysis approaches can be categorized as either glove-based or vision-based. Glove-based approaches use sensors attached to a glove to detect finger flexion, while vision-based approaches use cameras and 3D modeling to analyze images of the hand. There are challenges to both approaches related to sensor accuracy, viewpoint changes, and processing requirements. Existing literature analyzes taxonomy of gestures and different algorithms for gesture recognition including HMM, KNN, and SVM. Applications of gesture recognition include virtual controllers, sign language translation, and assistive robotics. Key difficulties are handling variability in appearances and balancing performance, cost, and user independence.
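Of the algorithms listed above, KNN is the simplest to show concretely. The sketch below is a generic k-nearest-neighbour classifier over gesture feature vectors, assuming features have already been extracted by a glove- or vision-based front end; it is illustrative only.

```python
import numpy as np
from collections import Counter

def knn_classify(x, train_X, train_y, k=3):
    """Label a feature vector by majority vote among its k nearest
    training samples under Euclidean distance."""
    dists = np.linalg.norm(np.asarray(train_X, float) - np.asarray(x, float), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

HMMs add temporal modelling on top of this (a gesture as a sequence of such feature vectors), and SVMs replace the vote with a learned decision boundary; the feature-extraction difficulties mentioned above affect all three equally.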
The document discusses augmented reality, including its definition as a live view of the physical real-world environment that is augmented by computer-generated perceptual information. It provides details on the history of augmented reality, how augmented reality systems work, examples of applications in various fields such as military, medical, education and entertainment, and the future potential of augmented reality. Limitations including tracking accuracy and reliance on mobile devices are also noted.
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are "augmented" by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data.[1] It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.
The document discusses augmented reality (AR) including:
1. An introduction to AR, how it overlays computer generated content onto the real world in real-time.
2. The differences between AR, virtual reality (VR), and mixed reality (MR) according to Paul Milgram's Reality-Virtuality Continuum.
3. The typical architecture of AR systems, including tracker units, processing units, and visual units that can be video see-through or optical see-through.
4. The application areas of AR including education, medical, and entertainment.
Yogesh Baisla's seminar presentation provided an overview of augmented reality (AR). AR superimposes digitally rendered images onto the real world using markers recognized by mobile apps. The seminar discussed the history of AR from the 1960s, how it works technically, main applications like medical, manufacturing, and entertainment. It also compared AR to virtual reality, described implementation frameworks using off-the-shelf hardware and software, reviewed advantages like increased knowledge but also disadvantages like privacy issues. The seminar concluded AR has potential to enhance our lives but also faces challenges like technological limitations and social acceptance.
This document provides an overview of a technical seminar on augmented reality technology. It begins with an introduction that defines augmented reality as overlaying computer graphics on the real world. It then discusses the differences between augmented reality and virtual reality. The document outlines the types of augmented reality and how the technology works by recognizing markers to render 3D objects. It highlights advantages such as shared experiences and improved education. Finally, it lists some applications including medical, entertainment, military training and more.
This document discusses augmented reality (AR), which superimposes digitally rendered images onto the real world. It provides a brief history of AR dating back to the 1960s and defines AR as augmenting the real world rather than replacing it like virtual reality. The document outlines how AR works using markers and mobile apps to overlay virtual images. It lists medical, manufacturing, entertainment and military as key application areas and discusses challenges like performance issues, user interaction limitations, and achieving social acceptance. In conclusion, AR has potential to enhance daily life by displaying additional contextual information.
This document provides an overview of augmented reality (AR), including its definition, evolution, components, implementation methods, applications, and future possibilities. AR enhances the real-world environment by overlaying digital content and information. The key components of an AR system are displays, tracking systems, and mobile computing power. Implementation can be done via markers, markerless recognition, or location-based methods. Applications include medical, education, military, tourism and more. The future of AR may include replacing cell phones and expanding computer screens into the real world.
The document discusses augmented reality (AR), including its history dating back to the 1960s, how it works by superimposing digital images onto the real world using markers recognized by smartphone cameras, and its applications in healthcare, military, manufacturing, and entertainment. Some advantages of AR are increasing knowledge and enabling shared experiences over long distances, while disadvantages include potential security and user experience issues.
This document provides an overview of augmented reality (AR), including its components, technologies, applications, and limitations. AR combines real and virtual elements to enhance one's current perception of reality. Key aspects covered include: AR uses displays, tracking systems, and environmental sensors to overlay virtual images on the real world in real-time; common display types are head-worn, hand-held, and projection; video and optical see-through are the main AR technologies; applications exist in medical, military, engineering, consumer and other fields; and limitations relate to tracking accuracy, computing power and size of AR systems. The future of AR is presented as expanding computer screens into the real environment through virtual overlays.
A presentation on augmented reality, consisting of an introduction, how AR works, components of AR, applications, limitations, recent developments, and a conclusion.
Augmented reality (AR) supplements reality by adding computer-generated perceptual information, like sound, video, graphics or GPS data to enhance the user's real-world experience. AR is achieved using devices like head-mounted displays or smartphone screens that overlay virtual objects on the real world. AR systems generate composite views by combining real scenes with virtual objects using tracking and computer vision. Examples of AR applications include medical visualization, military training, navigation aids, education, and entertainment experiences.
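The "composite view" idea above reduces, at its simplest, to masked compositing: wherever the tracker says a virtual object should appear, its rendered pixels replace the camera frame's. This is a minimal sketch with hypothetical names, not any particular AR system's code.

```python
import numpy as np

def composite(real, virtual, mask):
    """Produce an AR composite frame: where `mask` is True, show the
    rendered virtual layer; elsewhere keep the real camera pixels."""
    out = real.copy()
    out[mask] = virtual[mask]
    return out
```

Real systems add alpha blending, occlusion handling, and lighting estimation on top, but the structure — real frame in, registered virtual layer over it, composite out — is the same.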
This document provides an overview of augmented reality including:
- The key differences between augmented reality and virtual reality, with augmented reality overlaying digital elements on the real world while maintaining a sense of the real environment.
- The main components of an augmented reality system including head-mounted displays, tracking systems like GPS, and mobile computing power.
- How video and optical see-through displays work to merge real and virtual scenes.
- Potential applications of augmented reality and some performance issues that need to be addressed.
An Introduction to Augmented Reality (AR) — Jai Sipani
Augmented reality (AR) involves overlaying computer-generated information on top of the real world. This document discusses AR systems, which combine real and virtual data in real-time using displays, tracking systems, and mobile computing. Example AR applications include Wikitude, Google Glass, and Pokemon Go. The document also outlines some key components of AR systems like head-mounted displays, tracking orientation, and challenges like tracking accuracy and limited mobile computing power. Overall, the document provides an overview of AR technology, examples, components, applications, and current limitations.
This document discusses augmented reality (AR), which combines real and virtual elements. It defines AR and outlines its components, including head-mounted displays, tracking systems, and mobile computing. Examples of AR applications in education, military, and gaming are provided. The key challenges of AR are accurate tracking and orientation. The conclusion states that AR will continue to blend real and virtual experiences.
This document discusses augmented reality (AR) and its potential applications. AR overlays digital information onto a live video feed, providing additional context. It works by identifying markers in the video and overlaying associated data. The document outlines several potential uses for AR, including training employees, digital advertising, understanding complex systems, and navigation. It also describes the hardware, software, tracking methods, and APIs needed to develop AR applications. Key applications mentioned include mobile advertising, supporting complex tasks, and industrial uses.
The document discusses augmented reality (AR) and its potential applications. It begins by defining AR as enhancing one's current perception of reality by overlaying digital information. The technology aims to seamlessly blend virtual objects with the real world by tracking a user's movements and positioning graphics accordingly. Some key points:
- AR is still in the early research phase but may become widely available by the next decade in the form of glasses.
- It has applications in education, gaming, military, and more by providing contextual information about one's surroundings.
- The main components of an AR system are head-mounted displays, tracking systems, and mobile computing power.
- There are two main types of head-mounted displays: video see-through and optical see-through.
Augmented reality and virtual reality technology — AMAN148668
This document presents an industrial training presentation on augmented reality. It begins with defining augmented reality as enhancing the real world with computer-generated information using software, apps and hardware like AR glasses. It then discusses why AR was introduced, such as for interactive learning experiences. The document outlines what AR is, how it works by superimposing digital information onto the real world, its current uses and applications. It also covers the impact, future potential in areas like education, gaming and more, as well as limitations and why continued research is important.
2. Contents
Introduction
What is AR?
Augmented Reality vs. Virtual Reality
How is it used?
How does it work?
Implementation Framework
Applications
Advantages
Disadvantages
Issues in Augmented Reality
3. INTRODUCTION
Augmented reality (AR) is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input, such as sound or graphics.
4. What’s the difference?
Augmented Reality
The system augments the real-world scene
The user maintains a sense of presence in the real world
Needs a mechanism to combine virtual and real worlds
Hard to register real and virtual objects
Virtual Reality
Totally immersive environment
The senses are under the control of the system
Needs a mechanism to feed the virtual world to the user
Hard to make the VR world interesting
6. Nuts and Bolts of AR
Characteristics of Augmented Reality:
• it combines real and virtual
• it is interactive in real time
• it is registered in 3D
Hardware needed for Augmented Reality:
• display
• tracking (e.g. GPS, compass)
• input devices (e.g. camera)
• sensors (e.g. accelerometer)
• processor (e.g. CPU)
7. Image Registration
Registration – the process of aligning real and synthetic stimuli
Image registration – the process of identifying key features in an image of the physical world so that computer-generated graphics can be properly aligned
Image registration is a computer vision task
It can be very difficult in the general case
Visual AR markers can be used to help
Requirements for immersive AR:
• High precision
• Real-time performance
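Once a marker's four corners are found in the camera image, registration comes down to fitting the transform that maps marker coordinates to image coordinates. The sketch below shows the standard Direct Linear Transform for a planar homography, which is the usual workhorse for marker-based AR; it is a generic illustration, not code from the slides.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: find the 3x3 matrix H such that
    dst ~ H @ src in homogeneous coordinates, from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the null vector of A, i.e. the last row of V^T.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2-D point through the homography."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With H in hand, any point of the virtual overlay (defined in marker coordinates) can be warped into the camera frame — this is what "properly aligned" means in practice, and why corner-detection precision matters so much.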
10. AR HANDHELD DISPLAYS
Small handheld computing devices
Use global positioning systems (GPS)
11. AR SPATIAL DISPLAYS
Nothing to wear or carry
Use digital projectors to display information
Marker-based and markerless devices
e.g. smart projectors
14. Limitations
Spam and security
It may take some generations to make augmented reality feasible as a general interface technique accessible to the public.
Content may obscure or narrow a user’s interests or tastes.
Privacy control will become a big issue: walking up to a stranger or a group of people might reveal status, tweets, and other information, causing breaches of privacy.
15. Issues in Augmented Reality
Performance issues
Real-time processing of images is a challenge and can slow down augmented reality systems.
Interaction issues
Users in a mixed environment can find it difficult to interact with the environment as they normally would.
Alignment issues
People working in augmented reality are highly sensitive to alignment errors; proper calibration and alignment with the reference frame of the world is crucial.
16. Looking into the Future
Expanding the PC screen into the real-world environment: program windows and icons appear as virtual devices in real space, operated by eye and gesture (gazing or pointing), e.g. Microsoft HoloLens.
Enhanced media applications such as pseudo-holographic virtual screens and virtual surround cinema.
Replacement of cell phones: eye dialing, insertion of information directly into the environment.
Framed pictures in a row
(Intermediate)
To reproduce the picture effects on this slide, do the following:
On the Home tab, in the Slides group, click Layout and then click Blank.
On the Insert tab, in the Images group, click Picture.
In the Insert Picture dialog box, select a picture and then click Insert.
Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Format Picture dialog box, resize or crop the image so that the height is set to 2” and the width is set to 2.67”.
To crop the picture, click Crop in the left pane, and in the Crop pane, under Crop position, enter values into the Height, Width, Left, and Top boxes.
To resize the picture, click Size in the left pane, and in the right pane, under Size and rotate, enter values into the Height and Width boxes.
On the slide, select the picture. Under Picture Tools, on the Format tab, in the Picture Styles group, click Quick Styles, and then click Reflected Bevel, Black (fifth row, fifth option from the left).
Select the picture. On the Home tab, in the Clipboard group, click the arrow to the right of Copy, and then click Duplicate.
Right-click the duplicate picture, and then click Change Picture. In the Insert Picture dialog box, select a picture and click Insert.
If the inserted picture is a different height and width, adjust the width and height. Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Size and Position dialog box, on the Size tab, resize or crop the picture as needed so that under Size and rotate, the Height box is set to 2” and the Width box is set to 2.67”. Resize the picture under Size and rotate by entering values into the Height and Width boxes. Crop the picture under Crop from by entering values into the Left, Right, Top, and Bottom boxes.
Repeat the process in steps 6-8 to create another duplicate picture, for a total of three pictures.
Drag the pictures to form a row across the slide.
Press and hold SHIFT and select all three pictures. On the Home tab, in the Drawing group, click Arrange, and then do the following:
Point to Align, and then click Align to Slide.
Point to Align, and then click Align Middle.
Point to Align, and then click Distribute Horizontally.
Click Group.
On the Home tab, in the Drawing group, click Shape Effects, point to 3-D Rotation, and then under Perspective click Perspective Contrasting Left (second row, fourth option from the left).
Drag the group to position it above the middle of the slide and also position it slightly to the left.
To reproduce the background effects on this slide, do the following:
Right-click the slide background area, and then click Format Background. In the Format Background dialog box, click Fill in the left pane, select Gradient fill in the Fill pane, and then do the following:
In the Type list, select Linear.
Click the button next to Direction, and then click Linear Down (first row, second option from the left).
In the Angle text box, enter 90⁰.
Under Gradient stops, click Add gradient stops or Remove gradient stops until two stops appear in the slider.
Also under Gradient stops, customize the gradient stops that you added as follows:
Select the first stop in the slider, and then do the following:
In the Position box, enter 50%.
Click the button next to Color, and then under Theme Colors click Black, Text 1, Lighter 5% (sixth row, second option from the left).
Select the second stop in the slider, and then do the following:
In the Position box, enter 100%.
Click the button next to Color, and then under Theme Colors click Dark Blue, Text 2, Lighter 40% (fourth row, fourth option from the left).
Framed pictures in a row
(Intermediate)
To reproduce the picture effects on this slide, do the following:
On the Home tab, in the Slides group, click Layout and then click Blank.
On the Insert tab, in the Images group, click Picture.
In the Insert Picture dialog box, select a picture and then click Insert.
Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Format Picture dialog box, resize or crop the image so that the height is set to 2” and the width is set to 2.67”.
To crop the picture, click Crop in the left pane, and in the Crop pane, under Crop position, enter values into the Height, Width, Left, and Top boxes.
To resize the picture, click Size in the left pane, and in the right pane, under Size and rotate, enter values into the Height and Width boxes.
On the slide, select the picture. Under Picture Tools, on the Format tab, in the Picture Styles group, click Quick Styles, and then click Reflected Bevel, Black (fifth row, fifth option from the left).
Select the picture. On the Home tab, in the Clipboard group, click the arrow to the right of Copy, and then click Duplicate.
Right-click the duplicate picture, and then click Change Picture. In the Insert Picture dialog box, select a picture and click Insert.
If the inserted picture is a different height and width, adjust the width and height. Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Size and Position dialog box, on the Size tab, resize or crop the picture as needed so that under Size and rotate, the Height box is set to 2” and the Width box is set to 2.67”. Resize the picture under Size and rotate by entering values into the Height and Width boxes. Crop the picture under Crop from by entering values into the Left, Right, Top, and Bottom boxes.
Repeat the process in steps 6-8 to create another duplicate picture, for a total of three pictures.
Drag the pictures to form a row across the slide.
Press and hold SHIFT and select all three pictures. On the Home tab, in the Drawing group, click Arrange, and then do the following:
Point to Align, and then click Align to Slide.
Point to Align, and then click Align Middle.
Point to Align, and then click Distribute Horizontally.
Click Group.
On the Home tab, in the Drawing group, click Shape Effects, point to 3-D Rotation, and then under Perspective click Perspective Contrasting Left (second row, fourth option from the left).
Drag the group to position it above the middle of the slide and also position it slightly to the left.
To reproduce the background effects on this slide, do the following:
Right-click the slide background area, and then click Format Background. In the Format Background dialog box, click Fill in the left pane, select Gradient fill in the Fill pane, and then do the following:
In the Type list, select Linear.
Click the button next to Direction, and then click Linear Down (first row, second option from the left).
In the Angle text box, enter 90⁰.
Under Gradient stops, click Add gradient stops or Remove gradient stops until two stops appear in the slider.
Also under Gradient stops, customize the gradient stops that you added as follows:
Select the first stop in the slider, and then do the following:
In the Position box, enter 50%.
Click the button next to Color, and then under Theme Colors click Black, Text 1, Lighter 5% (sixth row, second option from the left).
Select the second stop in the slider, and then do the following:
In the Position box, enter 100%.
Click the button next to Color, and then under Theme Colors click Dark Blue, Text 2, Lighter 40% (fourth row, fourth option from the left).
Framed pictures in a row
(Intermediate)
To reproduce the picture effects on this slide, do the following:
On the Home tab, in the Slides group, click Layout and then click Blank.
On the Insert tab, in the Images group, click Picture.
In the Insert Picture dialog box, select a picture and then click Insert.
Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Format Picture dialog box, resize or crop the image so that the height is set to 2” and the width is set to 2.67”.
To crop the picture, click Crop in the left pane, and in the Crop pane, under Crop position, enter values into the Height, Width, Left, and Top boxes.
To resize the picture, click Size in the left pane, and in the right pane, under Size and rotate, enter values into the Height and Width boxes.
On the slide, select the picture. Under Picture Tools, on the Format tab, in the Picture Styles group, click Quick Styles, and then click Reflected Bevel, Black (fifth row, fifth option from the left).
Select the picture. On the Home tab, in the Clipboard group, click the arrow to the right of Copy, and then click Duplicate.
Right-click the duplicate picture, and then click Change Picture. In the Insert Picture dialog box, select a picture and click Insert.
If the inserted picture is a different height and width, adjust the width and height. Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Size and Position dialog box, on the Size tab, resize or crop the picture as needed so that under Size and rotate, the Height box is set to 2” and the Width box is set to 2.67”. Resize the picture under Size and rotate by entering values into the Height and Width boxes. Crop the picture under Crop from by entering values into the Left, Right, Top, and Bottom boxes.
Repeat the process in steps 6-8 to create another duplicate picture, for a total of three pictures.
Drag the pictures to form a row across the slide.
Press and hold SHIFT and select all three pictures. On the Home tab, in the Drawing group, click Arrange, and then do the following:
Point to Align, and then click Align to Slide.
Point to Align, and then click Align Middle.
Point to Align, and then click Distribute Horizontally.
Click Group.
On the Home tab, in the Drawing group, click Shape Effects, point to 3-D Rotation, and then under Perspective click Perspective Contrasting Left (second row, fourth option from the left).
Drag the group to position it above the middle of the slide and also position it slightly to the left.
To reproduce the background effects on this slide, do the following:
Right-click the slide background area, and then click Format Background. In the Format Background dialog box, click Fill in the left pane, select Gradient fill in the Fill pane, and then do the following:
In the Type list, select Linear.
Click the button next to Direction, and then click Linear Down (first row, second option from the left).
In the Angle text box, enter 90⁰.
Under Gradient stops, click Add gradient stops or Remove gradient stops until two stops appear in the slider.
Also under Gradient stops, customize the gradient stops that you added as follows:
Select the first stop in the slider, and then do the following:
In the Position box, enter 50%.
Click the button next to Color, and then under Theme Colors click Black, Text 1, Lighter 5% (sixth row, second option from the left).
Select the second stop in the slider, and then do the following:
In the Position box, enter 100%.
Click the button next to Color, and then under Theme Colors click Dark Blue, Text 2, Lighter 40% (fourth row, fourth option from the left).
Framed pictures in a row
(Intermediate)
To reproduce the picture effects on this slide, do the following:
On the Home tab, in the Slides group, click Layout and then click Blank.
On the Insert tab, in the Images group, click Picture.
In the Insert Picture dialog box, select a picture and then click Insert.
Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Format Picture dialog box, resize or crop the image so that the height is set to 2” and the width is set to 2.67”.
To crop the picture, click Crop in the left pane, and in the Crop pane, under Crop position, enter values into the Height, Width, Left, and Top boxes.
To resize the picture, click Size in the left pane, and in the right pane, under Size and rotate, enter values into the Height and Width boxes.
On the slide, select the picture. Under Picture Tools, on the Format tab, in the Picture Styles group, click Quick Styles, and then click Reflected Bevel, Black (fifth row, fifth option from the left).
Select the picture. On the Home tab, in the Clipboard group, click the arrow to the right of Copy, and then click Duplicate.
Right-click the duplicate picture, and then click Change Picture. In the Insert Picture dialog box, select a picture and click Insert.
If the inserted picture has a different height or width, resize or crop it to match the first picture. Under Picture Tools, on the Format tab, in the bottom right corner of the Size group, click the Size and Position dialog box launcher. In the Format Picture dialog box, under Size and rotate, enter 2” in the Height box and 2.67” in the Width box; or, to crop instead, under Crop position, enter values into the Height, Width, Left, and Top boxes.
Repeat the process in steps 6-8 to create another duplicate picture, for a total of three pictures.
Drag the pictures to form a row across the slide.
Press and hold SHIFT and select all three pictures. On the Home tab, in the Drawing group, click Arrange, and then do the following:
Point to Align, and then click Align to Slide.
Point to Align, and then click Align Middle.
Point to Align, and then click Distribute Horizontally.
Click Group.
On the Home tab, in the Drawing group, click Shape Effects, point to 3-D Rotation, and then under Perspective click Perspective Contrasting Left (second row, fourth option from the left).
Drag the group so that it sits above the middle of the slide and slightly to the left.
To reproduce the background effects on this slide, do the following:
Right-click the slide background area, and then click Format Background. In the Format Background dialog box, click Fill in the left pane, select Gradient fill in the Fill pane, and then do the following:
In the Type list, select Linear.
Click the button next to Direction, and then click Linear Down (first row, second option from the left).
In the Angle text box, enter 90°.
Under Gradient stops, click Add gradient stops or Remove gradient stops until two stops appear in the slider.
Also under Gradient stops, customize the gradient stops that you added as follows:
Select the first stop in the slider, and then do the following:
In the Position box, enter 50%.
Click the button next to Color, and then under Theme Colors click Black, Text 1, Lighter 5% (sixth row, second option from the left).
Select the second stop in the slider, and then do the following:
In the Position box, enter 100%.
Click the button next to Color, and then under Theme Colors click Dark Blue, Text 2, Lighter 40% (fourth row, fourth option from the left).