How Apple's ARKit Changed Reality

A few years ago, Apple CEO Tim Cook said: "We believe Augmented Reality is going to change the way we use technology forever."

From that point on, Apple has invested massive amounts of energy and resources into the creation of Augmented Reality (AR) frameworks. The aim? Making it easy for developers to implement AR features in their apps. Bringing virtual content to regular consumers. Pushing this technology toward the seamless blend of real and virtual we've so far seen only in movies.

It’s now all happening. How’d we get here?

In 2017, Apple introduced the ARKit framework, which allows devices to use their front and back cameras to understand their surroundings. At the time, ARKit was equipped with an initial set of features that allowed for a basic understanding of, and interactions with, the environment. Since then, new features and improvements have been announced every year.

The most recent updates were introduced at WWDC 2019, the Apple conference for developers that took place in San Jose, CA, back in June. We got the opportunity to attend—and from what we learned about ARKit 3.0, it’s clear that more and more users will be bumping into strangers on the street as they play games utilizing AR on their devices. Anytime, anywhere.

Following Apple’s announcements, let’s take a look at the latest additions that make ARKit 3 revolutionary, how our own developers experiment with possible uses—and how, version after version, the framework never fails to evolve and impress.


ARKit 3.0 - The Most Impactful Version to Date

Quite a few exciting features coming this year. As expected.

The framework is now able to detect a person in a scene, to understand body motions and to drive a virtual character with the movements of a real person. This can all be used to build a dancing contest app, a fighting game or a virtual golf tournament. A customer can try on virtual clothes before making an e-shop purchase. And taking this technology to a whole new level, a machine-learning model can be trained to understand body motions and let an app respond to them.
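
A minimal sketch of what this looks like in code, using ARKit 3's `ARBodyTrackingConfiguration` (supported on A12-and-newer devices); the class name `BodyTrackingViewController` is our own:

```swift
import UIKit
import ARKit

// Sketch: run an ARKit 3 body-tracking session and read the tracked
// skeleton's joints, which can then drive a rigged virtual character.
class BodyTrackingViewController: UIViewController, ARSessionDelegate {
    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The skeleton exposes a model-space transform per joint.
            let skeleton = bodyAnchor.skeleton
            if let headTransform = skeleton.modelTransform(for: .head) {
                print("Head position: \(headTransform.columns.3)")
            }
        }
    }
}
```

Each `ARBodyAnchor` update carries the full skeleton, so an app can map joint transforms onto a character model frame by frame.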

The opportunities this feature brings are nearly endless.

One smaller but welcome enhancement is that unlike previous versions, ARKit 3 is capable of tracking more than one face. In the beauty industry, for example, this could be used in an app where a user tries on lipstick shades together with friends, establishing which shade suits each person best.
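
Enabling this is a one-line change on the face-tracking configuration; a sketch, assuming `session` is an existing `ARSession`:

```swift
import ARKit

// Sketch: ARKit 3 multi-face tracking. supportedNumberOfTrackedFaces
// reports the device's limit, so this requests as many faces as the
// hardware allows.
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces =
    ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
session.run(configuration)
// Each detected face then arrives as its own ARFaceAnchor in the
// session delegate callbacks.
```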

Another slightly hidden but huge addition? Collaborative sessions. While ARKit 2 enabled building a fictional world in your bedroom (more on that later), there was never an easy way to do this in real-time with two users building one world. But thanks to the collaborative sessions in ARKit 3, this limitation is gone. Multiple devices can now combine their understanding of the environment around them, and ARKit can even tell where other devices are in relation to yours. This obviously has huge potential in gaming, because it is now simple to create a game where you compete with your friends in the real world.
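
Under the hood, each device emits collaboration data that the app relays to its peers over whatever transport it likes (MultipeerConnectivity is the common choice). A sketch, where `session` is the local `ARSession` and `send(_:)` stands in for your own networking:

```swift
import ARKit

// Sketch: enable ARKit 3 collaborative sessions.
let config = ARWorldTrackingConfiguration()
config.isCollaborationEnabled = true
session.run(config)

// ARSessionDelegate: forward outgoing collaboration data to peers...
func session(_ session: ARSession,
             didOutputCollaborationData data: ARSession.CollaborationData) {
    if let encoded = try? NSKeyedArchiver.archivedData(
            withRootObject: data, requiringSecureCoding: true) {
        send(encoded) // hypothetical transport function
    }
}

// ...and feed data received from peers back into the local session.
func receive(_ encoded: Data) {
    if let data = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARSession.CollaborationData.self, from: encoded) {
        session.update(with: data)
    }
}
```

Once both sessions have exchanged enough data, anchors created on one device appear on the other, positioned correctly in the shared space.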


STRV & ARKit - Years of Rewarding Experiments

At STRV, we believe that AR will be an integral part of countless mobile apps in the future. So for quite some time, we’ve been paying close attention to it in our research projects.

Our first idea was to create an old-school shooting game with aliens. So we did. And we called it Alien Shooter.

The rules are simple: You need to find an alien ship that can come at you from any direction, aim at it with your camera, and shoot. Even though the game is very straightforward, we were able to test and play around with many essential principles of AR development—like placing virtual content onto a scene, detecting surfaces, animating virtual content, interacting with it and more.
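
The core mechanic behind a game like this is small; a sketch of the era's ARSCNView hit-testing approach, assuming `sceneView` is an existing `ARSCNView` with a gesture recognizer attached:

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: hit-test a screen tap against detected surfaces and place a
// virtual object (here, a simple sphere) at that real-world point.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    guard let result = sceneView.hitTest(
            point, types: .existingPlaneUsingExtent).first
    else { return }
    let node = SCNNode(geometry: SCNSphere(radius: 0.05))
    node.simdTransform = result.worldTransform // place on the surface
    sceneView.scene.rootNode.addChildNode(node)
}
```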

Of course, we couldn't resist trying our hand at face tracking experiments. So we developed an app that lets users place a variety of objects on their face or head. Hats, bunny ears, glasses, weird masks, you name it. And because ARKit allows for tracking of around 50 facial features—like eye blinking, pupil position, smiling, frowning—we took advantage of this as well and created a simple model that determines a person’s feelings based on his/her facial expression.
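
Those facial features surface as blend-shape coefficients on `ARFaceAnchor`, each a value from 0.0 to 1.0. A sketch of the idea behind a mood estimate; the thresholds and the `mood(of:)` function are illustrative, not our production model:

```swift
import ARKit

// Sketch: read ARKit facial blend shapes and map them to a rough mood.
func mood(of faceAnchor: ARFaceAnchor) -> String {
    let blendShapes = faceAnchor.blendShapes
    let smile = blendShapes[.mouthSmileLeft]?.floatValue ?? 0
    let frown = blendShapes[.browDownLeft]?.floatValue ?? 0
    if smile > 0.5 { return "happy" }
    if frown > 0.5 { return "displeased" }
    return "neutral"
}
```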

Another common use case of AR is enriching the world around us with useful information. Our team decided to experiment with AR navigation; instead of just following a line drawn on top of a map, you can look for arrows that show you the way on a pavement or at street corners. And to make sure you don't miss an arrow, there’s also a virtual animated character who is always a few steps ahead of you.

It’s important to state that AR goes hand in hand with machine learning. If you want to implement a feature that is not supported by ARKit, like real-time hand tracking, you have to train a model for that. Thanks to several useful libraries and frameworks, we were able to develop real-time hand and finger tracking. So if you’re thinking of buying a wedding ring online, but you want to see what it’ll look like on your finger—no problem.

Right now, AR has a stigma of being a technology with a strong wow effect, but with little practical use. We believe that’s completely wrong. There is so much AR can do for a variety of businesses. All that’s needed is to know the possibilities top to bottom.

We see huge potential in integrating AR into mobile apps on Apple devices. ARKit makes this extremely easy. Our iOS department has been working with AR for years, and we plan to continue experimenting with all of the different use cases it enables.


Before ARKit 3 - The Evolution of Apple’s Efforts

Along with the newest additions found in ARKit 3, the framework includes everything its previous versions introduced. For those who aren’t familiar, we’ve outlined the most noteworthy features from the past years, and what they made possible for the world of development.


ARKit 1.0 - The Story Begins

The first version of ARKit didn't introduce many features, but it was groundbreaking. The framework was capable of tracking the world around a device, and it could understand the position and orientation of the device relative to an initial point quite reliably. It was also able to detect a horizontal plane and track a face with an iPhone X’s front camera.
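
That original setup fits in a few lines; a minimal sketch of an ARKit 1.0-style session driving an `ARSCNView`:

```swift
import ARKit

// Sketch: the original ARKit 1.0 configuration, world tracking with
// horizontal plane detection (the only plane option at the time).
let sceneView = ARSCNView()
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
sceneView.session.run(configuration)
// Detected surfaces arrive as ARPlaneAnchor instances via the delegate.
```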

As is customary for Apple, all of this was provided to developers in a convenient, easy-to-use way—which is why the first version of ARKit was so revolutionary. Thus AR became something that could be implemented in an iPhone or iPad app by any iOS developer.

Even though the first set of features wasn’t particularly rich, it was enough to make things like placing a virtual sofa into a living room, to see if it matches, trivial. Another thing made possible? Playing a football match on the dining room table (because a table is also a horizontal plane).

Face tracking made it simple to implement Instagram-like or Snapchat-like filters that place a bunny nose or a cowboy hat on anyone’s head.

One notable mobile gaming blockbuster was 2016’s Pokémon GO. Its foundation was a great business idea, but a huge part of its success was the AR experience, something unique on the mobile apps market at that time. At least until ARKit came along one year later.

ARKit 1.5 - Recognize & Realize

Released just six months after ARKit 1.0, this version was introduced with the capabilities to detect vertical planes and predefined 2D images—allowing users to place virtual furniture and home decor not only on the floor of a room, but also on a wall. Testing out wallpapers became a breeze.
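
The upgrade from ARKit 1.0 is a single extra option on the same configuration; a sketch, assuming `session` is an existing `ARSession`:

```swift
import ARKit

// Sketch: ARKit 1.5 lets a world-tracking session detect vertical planes
// alongside horizontal ones.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
session.run(configuration)
// Each ARPlaneAnchor's `alignment` property tells you whether it is a
// wall or a floor/table.
```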

Moreover, ARKit could now be “taught” to recognize several static images. Whenever one of those images gets into the camera's view, the app is notified. This makes it possible to develop a game where players are supposed to find checkpoints marked with a special logo; when ARKit spots a certain logo, the player is presented with virtual content relevant to the checkpoint. This feature also enables creating a gallery guide that offers detailed descriptions of paintings as soon as they come into the camera’s view.
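
The "teaching" happens through reference images bundled with the app; a sketch in which the asset catalog group name "Checkpoints" is hypothetical and `session` is an existing `ARSession`:

```swift
import ARKit

// Sketch: ARKit 1.5 image detection using bundled reference images.
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "Checkpoints", bundle: nil) {
    configuration.detectionImages = referenceImages
}
session.run(configuration)

// When a known image enters the camera's view, the session delivers an
// ARImageAnchor you can attach virtual content to.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let imageAnchor as ARImageAnchor in anchors {
        print("Spotted: \(imageAnchor.referenceImage.name ?? "unknown")")
    }
}
```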

ARKit 2.0 - Save the World

Version 2 was released in September 2018 and enriched detection capabilities with predefined 3D object recognition. The need for a 2D logo marking a checkpoint was gone; now a scanned real-world object, say a statue in a city square, can serve as the trigger for a virtual city guide, letting users point their cameras at it and get the full history of the place or thing they are examining.
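
Recognizable objects are first scanned into `ARReferenceObject` files and bundled with the app; a sketch where the asset group name "Landmarks" is hypothetical and `session` is an existing `ARSession`:

```swift
import ARKit

// Sketch: ARKit 2 3D object detection from bundled reference objects.
let configuration = ARWorldTrackingConfiguration()
if let landmarks = ARReferenceObject.referenceObjects(
        inGroupNamed: "Landmarks", bundle: nil) {
    configuration.detectionObjects = landmarks
}
session.run(configuration)
// A recognized object surfaces as an ARObjectAnchor in the delegate,
// ready to carry the guide's virtual content.
```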

Also thanks to ARKit 2? Not only detecting 2D images, but also tracking them. This means that when a predefined image is detected and comes with virtual content attached to it, the content moves along with the image in real time. Imagine an interactive exhibition about car manufacturing where a 2D image of a car moves along the assembly line, and visitors can watch on their screens as a virtual vehicle is completed on its journey down the conveyor belt. That’s just undeniably cool.
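
Turning detection into tracking is one extra property; a sketch where the asset group name "Logos" is hypothetical and `session` is an existing `ARSession`:

```swift
import ARKit

// Sketch: ARKit 2 image tracking. Attached virtual content follows the
// image as it moves through the scene.
let configuration = ARWorldTrackingConfiguration()
if let logos = ARReferenceImage.referenceImages(
        inGroupNamed: "Logos", bundle: nil) {
    configuration.detectionImages = logos
    configuration.maximumNumberOfTrackedImages = 1 // track, not just detect
}
session.run(configuration)
// In the delegate, imageAnchor.isTracked tells you whether the image is
// currently being followed in real time.
```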

And one more very impressive feature worth mentioning: Let's say you want to build a fictional virtual world in the right corner of your bedroom. But you don’t have time to build a whole world at once, so you need to save your progress. Then, when you launch the app next time and point your camera at the corner where you began your construction—there it is, in the very state where you left off. How? ARKit 2 added an easy way to store all knowledge about the surrounding world and the virtual content in it, so an experience from one session can be reloaded at the next app launch, and it can even be shared with another device.
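
That stored knowledge is an `ARWorldMap`. A sketch of saving and restoring one; the helper function names and the file URL are ours, not ARKit's:

```swift
import ARKit

// Sketch: ARKit 2 world-map persistence. Snapshot the session's map to
// disk, then restore it on the next launch.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(
                  withRootObject: map, requiringSecureCoding: true)
        else { return }
        try? data.write(to: url)
    }
}

func restoreSession(_ session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(
              ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map // relocalize into the saved world
    session.run(configuration,
                options: [.resetTracking, .removeExistingAnchors])
}
```

The same serialized map can be sent to another device, which is also the basis of multi-user experiences before ARKit 3's collaborative sessions.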

What's Next?

Apple shows no signs of slowing down. And the AR takeover is well on its way. What does that mean for developers?

It means we need to stay alert, to continue learning and to always be looking for something that’s never been done before. That’s how we see it, at least.


Jan Schwarz