1 Perception: Immersive experiences are scripted productions.
Early versions of immersive technologies, which include augmented reality (AR) and virtual reality (VR), resemble their video game forebears in that they are essentially journeys of discovery through different stages of preprogrammed experiences. We can scale virtual cliffs and mountains while riding a roller coaster or stumble over park benches in pursuit of Pokémon Go characters. However, as immersive technologies become imbued with machine learning and AI, digital experiences will become increasingly multisensory, making them more convincingly “real.” For example, Fast Company reports that surgeons can now practice a procedure using VR with a stylus that simulates the feel of operating on an open knee joint. The AR and VR of the future will gather information from the surrounding physical environment and instantly pass it back to an AI for analysis in order to derive unique, in-the-moment responses to our actions.
2 Perception: You need bulky equipment.
We won’t be wearing those silly goggles forever. As the sensors that pick up data from our movements and speech become smaller, they will be easier to embed in everything. Imagine being in a factory in which every object has a visual overlay that lets you drill into information about that object, handle a digital version of it, or control it remotely. Today, firefighters can wear a smart helmet from Qwake Tech that combines AR technology with a thermal imaging camera. The device outlines the edges of objects (such as doors and stairs) and highlights sources of high heat, enabling firefighters to move through buildings more quickly. Companies including BMW are experimenting with advanced gesture recognition technology that would enable users to control devices without having to touch them. You might soon be able to launch a video chat by waving your hand.
3 Perception: A physical presence is required.
For now. But before long, you’ll be able to create a VR avatar that looks like you, that sounds like you, and that can meet with your colleagues’ VR avatars in a realistic virtual space. The technology will likely require a brain-computer interface such as a headset or a brain-implanted chip. Neurable has a prototype software platform to power headset sensors that let users maneuver in VR video games using only their thoughts. Given sufficient computing power and a smart enough AI, you may one day be able to program your VR avatar to participate in a virtual meeting, tour the digital twin of a factory, or attend a keynote speech as your proxy and (theoretically) do a good enough job that your colleagues would never guess it wasn’t actually you. That will raise questions about how to tell if an avatar is being controlled live by a human or operated by a bot—and whether to require the differences be obvious.
Source: Digital Economy
Definition: Augmented reality (AR)
AR is a live direct or indirect view of a physical, real-world environment whose elements are "augmented" by computer-generated perceptual information, ideally across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.
The overlaid sensory information can be constructive (i.e. additive to the natural environment) or destructive (i.e. masking the natural environment) and is spatially registered with the physical world so that it is perceived as an immersive aspect of the real environment. In this way, augmented reality alters one's current perception of a real-world environment, whereas virtual reality replaces the real-world environment with a simulated one. Augmented reality is closely related to the largely synonymous term computer-mediated reality.
Augmented reality is used to enhance natural environments or situations and to offer perceptually enriched experiences. With the help of advanced AR technologies (e.g. adding computer vision and object recognition), information about the user's surrounding real world becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual or real; for example, otherwise imperceptible sensed or measured information, such as electromagnetic radio waves, can be overlaid in exact alignment with where it actually is in space.
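The phrase "spatially registered" has a concrete geometric core: given the camera's pose, each virtual annotation's world position must be mapped to the pixel where the corresponding real object appears, every frame. A minimal sketch of that mapping, assuming a simple pinhole camera model (the function name and parameters are illustrative, not taken from any real AR SDK):

```python
import numpy as np

def project_to_screen(world_point, camera_pose, fx, fy, cx, cy):
    """Project a 3D world point into 2D pixel coordinates.

    AR registration reduces to this per frame: given the tracked
    camera pose, find where a virtual overlay must be drawn so it
    sits on top of the real object it annotates.
    """
    R, t = camera_pose           # 3x3 rotation, 3-vector translation
    p_cam = R @ world_point + t  # world -> camera coordinates
    x, y, z = p_cam
    if z <= 0:
        return None              # point is behind the camera; nothing to draw
    u = fx * (x / z) + cx        # pinhole projection with focal lengths
    v = fy * (y / z) + cy        # (fx, fy) and principal point (cx, cy)
    return (u, v)

# A camera at the origin looking down +z, rendering a 640x480 image.
pose = (np.eye(3), np.zeros(3))
uv = project_to_screen(np.array([0.0, 0.0, 2.0]), pose, 500, 500, 320, 240)
print(uv)  # point straight ahead lands at the image centre: (320.0, 240.0)
```

Real AR systems add lens-distortion correction and continuous pose tracking on top of this, but the alignment between overlay and world always passes through a projection of this form.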
Augmented reality brings the components of the digital world into a person's perceived real world. The first functional AR systems that provided immersive mixed-reality experiences for users were invented in the early 1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force's Armstrong Labs in 1992. Another example is an AR helmet for construction workers that displays information about the construction site. Augmented reality is also transforming the world of education, where content may be accessed by scanning or viewing an image with a mobile device. Early immersive augmented reality experiences were used in the entertainment and gaming businesses, but other industries are now also becoming interested in AR's possibilities, for example in knowledge sharing, education, managing the flood of information, and organizing remote meetings.
Augmented reality has a lot of potential for gathering and sharing tacit knowledge. Augmentation techniques are typically performed in real time and in semantic context with environmental elements. Immersive perceptual information is sometimes combined with supplemental information, such as scores overlaid on a live video feed of a sporting event. This combines the benefits of augmented reality technology with those of heads-up display (HUD) technology.
The way we present data to others has evolved from simple pie charts and bar graphs to sophisticated interactive visualizations drawing on real-time data sets. This helps us to communicate insights faster and more effectively.
It’s only the start though, and as the data available to us grows increasingly complex and fast-moving, techniques for presenting it are continuing to evolve. Today’s Big Data projects often involve amalgamating hundreds of data sources, structured and unstructured, and it’s likely that 2D images, or even 3D ones presented on a flat screen, will no longer cut the mustard.
Virtual reality and augmented reality – at the moment primarily considered a medium for delivering entertainment – offer the intriguing possibility of letting us “step inside” the data. 360-degree vision instantly broadens the available canvas, and interactions become more intuitive as we can reach out to touch and manipulate what is shown to us.
One provider of such a solution is Virtualitics, whose technology is currently being tested by clients in the finance, pharmaceuticals and energy industries. Its CEO and founder Michael Amori talked about how fusing machine learning with VR- and AR-driven reporting will help unlock the potential of Big Data for an ever-growing range of organizations and enterprises.
“The motivation is simple: the amount of data in the world doubles every year, according to some sources, and less than one per cent of that data gets analyzed. Having been in charge of a trading desk on Wall Street for seven years, I know that, of the data that does get analyzed, a lot of the time that analysis isn’t very useful.
“Enterprises are leaving a lot of potentially really useful information on the table, which means they are wasting money.”
This is largely due to the limitations of visualization and reporting tools which aren’t up to the job of presenting complex information in a clear and concise way.
“So, you hire a data science team and they find out that, of 100 metrics you analyze, there are five which produce the outcome you’re interested in. There’s currently no way to visualize all of those metrics at the same time – to see how they all interact – because we’re stuck looking at two or three things in a 2D scatter graph. If what you’re interested in is a function of five things, or eight or ten – you can’t visualize them all at the same time with traditional tools.”
By presenting data inside a 3D canvas which wraps around the user, far more than the traditional three dimensions become available. As well as placement on X, Y or Z co-ordinates, data points can be distinguished by size, color, transparency, as well as direction and velocity of movement.
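The channel mapping described above can be sketched in a few lines: position carries three metrics, and further metrics ride on size, colour and transparency. A hedged illustration only; the field names, value ranges and the bond-like record are invented for this sketch, and a real product such as Virtualitics would expose its own encoding API. Inputs are assumed to be pre-normalised to [0, 1].

```python
import colorsys

def encode_point(record):
    """Map one multi-metric record onto visual channels of a 3D scene,
    so five or six dimensions are visible in a single data point."""
    # 5th dimension: hue, sweeping green (low risk) toward red (high risk)
    r, g, b = colorsys.hsv_to_rgb((1.0 - record["risk"]) * 0.33, 1.0, 1.0)
    return {
        "position": (record["price"], record["yield"], record["duration"]),
        "size": 0.5 + record["volume"] * 2.0,   # 4th dimension: marker size
        "color": (r, g, b),                     # 5th dimension: hue
        "opacity": 1.0 - record["age"] * 0.8,   # 6th dimension: transparency
    }

bond = {"price": 0.4, "yield": 0.7, "duration": 0.2,
        "volume": 0.5, "risk": 1.0, "age": 0.25}
print(encode_point(bond)["size"])  # 1.5
```

Direction and velocity of movement, mentioned above, would add two more channels by animating each point's position between frames; the principle is the same, one metric per perceptual channel.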
And while this all may seem fairly abstract it’s important to remember that it’s being presented via a medium which we have evolved over thousands of years to navigate intuitively – 3D space (or the illusion of it, at least). Bar charts, by comparison, have only been a part of our toolset for a couple of hundred years.
This means we can interact with the data in a way that is far more natural – reaching out to manipulate objects with our hands, moving around them to view them from a clearer perspective and highlighting objects of interest with a point of the finger.
Another crucial advantage of a VR or AR environment is that the “canvas” can be inhabited and interrogated by more than one person at the same time – limited only by how many headsets are available.
“We can meet inside a shared virtual office – where we’re both able to touch the data, interact with it – you can ask me ‘hey, what are those red dots over there in the corner?’ and I can tell you, those are the bad bonds – the ones we want to sell.”
VR can also help to eliminate distractions. With the user’s visual (and sometimes aural) senses entirely dictated by the output of the VR environment, attention can stay fixed on what matters until the headset is removed.
There are already some great examples of VR-enabled data visualization which anyone can experience with a cheap cardboard headset, for example Google’s exploration of data around the UK Brexit issue. At the moment, these are mostly journalistic or consumer-oriented, but as businesses increasingly struggle to communicate meaning from the ocean of information they are generating, it’s likely that they will increasingly look towards these technologies for enterprise use.
One pioneer in the use of VR in visualization was tire manufacturer Goodyear, which created a simulation allowing every aspect of the performance of their racing tires to be examined, and the effects of changes such as road surface and weather to be calculated in real-time.
Amori confidently predicts that “VR and AR will become a huge game-changer for enterprises. Everyone right now thinks of them as toys – for video games and entertainment – but they have a very serious use and bring two things to the table: the ability to see many dimensions in data at the same time, and a collaborative environment which is missing in current analytics tools.”
Source: Bernard Marr
Virtual reality (VR) captures people’s attention. This technology has been applied in many sectors, such as medicine, industry, education, video games, and tourism. Perhaps its biggest area of interest has been leisure and entertainment.
Regardless of the sector, the introduction of virtual or augmented reality has faced several constraints: it was expensive, it had poor ergonomics, and it required too much work to create content.
Recent technological innovations, including society’s rapid adoption of smartphones, have made virtual reality and augmented reality accessible to almost anyone. In addition, several large companies, including Apple, Facebook, Samsung, and Magic Leap, have increased their investment in these technologies to improve their accessibility within the next few years.
Educational institutions will benefit from better access to virtual technologies; this will make it possible to teach in virtual environments that are impossible to recreate in physical classrooms, such as virtual laboratories, machinery, industrial plants, or even medical scenarios.
The huge possibilities of accessible virtual technologies will make it possible to break the boundaries of formal education.
Source: Sci Tech. Ed