28 Sep 2015


Last week, we brought you the first piece of a three-part series on developing for the Apple TV, focusing on the familiar territory devs can expect to encounter. Here, in part two, our skilled team shines a light on the differences… Looking for part one or our riveting conclusion?

The biggest difference between iOS 9 and tvOS is the introduction of TVML and TVJS: TV Markup Language and TV JavaScript, respectively. TVML is a markup language created by Apple that closely resembles HTML/XML. An app’s UI and much of its functionality are built from a tree of TVML elements. Each new screen in an app, traditionally represented by a view controller, is defined by its own TVML (.xml) file. Apple has made quite a few types of UI elements available for use. (The reference for those elements is here.) There are familiar UI pieces, such as buttons and search fields, and there are also new concepts, such as lockup. A lockup behaves similarly to a superview in that both contain subelements. UI elements can have attributes such as src, which is typically a URL to a resource the element should display. Navigation functionality can also be defined through these attributes.
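As a sketch of how such a tree fits together (the image URL and title here are placeholders), a lockup containing an img with a src attribute might look like this:

```xml
<document>
  <stackTemplate>
    <collectionList>
      <shelf>
        <section>
          <!-- A lockup groups its subelements into one focusable unit -->
          <lockup>
            <img src="https://example.com/poster.png" width="250" height="376" />
            <title>Example Movie</title>
          </lockup>
        </section>
      </shelf>
    </collectionList>
  </stackTemplate>
</document>
```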

Apple has provided a collection of premade TVML templates that work well for content display and navigation on the TV. These templates provide a strong foundation for UX/UI design on the Apple TV.
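For instance, one of the simpler premade templates, alertTemplate, lays out a title, description, and button with almost no markup (the strings below are placeholders):

```xml
<document>
  <alertTemplate>
    <title>Sign In Required</title>
    <description>Sign in to view this content.</description>
    <button>
      <text>OK</text>
    </button>
  </alertTemplate>
</document>
```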

TVJS is the JavaScript framework that determines which TVML file to show and responds to user interaction. When using TVJS, the app still needs a traditional application delegate, which uses a TVApplicationControllerContext to tell the system the location of the initial JavaScript file. Apple has provided an example app using TVML and TVJS here. TVJS incorporates many of the standard Document Object Model classes listed here.
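A minimal sketch of that native bootstrapping, written in modern Swift with a placeholder URL for the initial JavaScript file, might look like this:

```swift
import UIKit
import TVMLKit

class AppDelegate: UIResponder, UIApplicationDelegate, TVApplicationControllerDelegate {
    var window: UIWindow?
    var appController: TVApplicationController?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        window = UIWindow(frame: UIScreen.main.bounds)

        // Tell the system where the initial TVJS file lives.
        // The URL is a placeholder; point it at your own server.
        let context = TVApplicationControllerContext()
        if let jsURL = URL(string: "https://example.com/client/app.js") {
            context.javaScriptApplicationURL = jsURL
        }

        appController = TVApplicationController(context: context,
                                                window: window,
                                                delegate: self)
        return true
    }
}
```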

The TVML/TVJS approach, called a client-server app, is interesting because the files that run the app do not have to ship with the binary and can instead be stored on a remote server. This allows the app’s look and functionality to be changed without requiring an App Store update. Couple this with the fact that the Apple TV is guaranteed to be online when your app is running, and a compelling argument can be made for this style of app. All in all, web developers will feel right at home creating a client-server app, and a larger group of developers will be able to create apps for an Apple product using their current skill set.

The focus engine is also a new mechanism built into tvOS. Using the remote, the user can move focus around to different UI elements on the screen. UIKit has been updated to be compatible with the focus engine. Assigning focus is completely handled by the OS, but it is possible to update focus programmatically as enumerated here.

From Apple:

“The focus engine communicates with your app using the UIFocusEnvironment protocol, which defines the focus behavior for a branch of the view hierarchy. UIKit classes that conform to this protocol include UIView, UIViewController, UIWindow, and UIPresentationController – in other words, classes that are either directly or indirectly in control of views on the screen. Overriding UIFocusEnvironment methods in your views and view controllers lets you control the focus behavior in your app.”
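As a sketch of the kind of override the quote describes, using the tvOS 9-era API names and a made-up view controller and button for illustration, a view controller can suggest where focus should land and then request a focus update programmatically:

```swift
import UIKit

class MenuViewController: UIViewController {
    let playButton = UIButton(type: .system)

    // From the focus APIs: suggest which view should receive focus
    // when the focus engine searches this branch of the hierarchy.
    override weak var preferredFocusedView: UIView? {
        return playButton
    }

    func focusPlayButton() {
        // Ask the focus engine to re-evaluate focus; it will consult
        // preferredFocusedView above on the next update cycle.
        setNeedsFocusUpdate()
        updateFocusIfNeeded()
    }
}
```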

Also, tvOS introduces parallaxing images, adding some flair to the focus engine. When an image has focus and the user rotates a finger around the touch pad on the remote, the image moves with the finger. UIImageView supports this automatically, and an image view will also apply the parallax effect while a superview containing it has focus if its adjustsImageWhenAncestorFocused property is set to true.
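Opting in is a one-property sketch (the image name is a placeholder):

```swift
import UIKit

let posterView = UIImageView(image: UIImage(named: "poster"))
// Opt in to the built-in focus effects: the image parallaxes when
// focused, and keeps doing so when a focused ancestor contains it.
posterView.adjustsImageWhenAncestorFocused = true
```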

Apple also created a new image format, LSR. It allows artists to create layers in a single image, and those layers move in opposite directions, adding to the parallax effect. The guide to creating those images is here.

Finally, unlike iOS, tvOS assumes most of your assets will be remote. In fact, the maximum size for a tvOS binary is 200 MB. A new system of grouping remote assets with tags has been devised: when creating assets in Xcode, they can be tagged and uploaded to the App Store. When the app needs a particular group of assets, it queries for a specific tag and then downloads them. The idea is to lazy-load the large images needed to support 1080p screens.
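In code, fetching a tagged group looks roughly like this (the tag and image names are invented for the example; tags are assigned to assets in the Xcode asset catalog):

```swift
import UIKit

// Request the asset pack tagged "1080p-artwork".
let request = NSBundleResourceRequest(tags: ["1080p-artwork"])
request.beginAccessingResources { error in
    if let error = error {
        print("Asset download failed: \(error)")
        return
    }
    // The tagged assets are now available through the normal APIs.
    let artwork = UIImage(named: "hero-1080p")
    // ... use the assets, then call request.endAccessingResources()
}
```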


Looking for part one or our riveting conclusion?