About this talk
What's new, interesting, fun and impactful for all things iOS and WWDC.
So the first thing you want to notice on the home screen is we have a new dock at the bottom now. Previously you could fit around six items in here on iPad; you can now cram tonnes of stuff in here. This area on the right hand side is a kind of predictive area, so it will suggest apps you might have opened recently or want to open. It also works with Handoff and Continuity. So if I open the, oops, the Messages app on my Mac, you might have seen it just pop up on the side there, 'cause I might want to transfer a conversation over. Thank you. Did that show up on there? I put Do Not Disturb on. It's come on on my Mac and it should be off. There we go. Okay. Right.

So yeah, so there's the dock. You can drag items into here, like you previously could. You can also obviously launch items from here; let's launch Maps. You can now access the dock from anywhere, so if we swipe up from the bottom of the screen, there's the dock with my items in it. Again I can launch from there.

Now for multitasking, you can drag items straight out of here. I can drag it over here for Slide Over mode. You can switch it to the other side of the screen if you want. You can drag it off the side to pull it back later just by sliding in from the edge. Oops. I want it on the right hand side. And you can also pull it down to put it into the standard kind of Split View mode as well.

There's a new app switcher. So if I slide up from the bottom, this is the app switcher mode, and these are the various apps I had open. This area on the right is Control Centre, which you previously got from the bottom of the screen. They've redone all of this. It's all completely configurable in Settings, so you can turn things on and off. It's also got screen recording built in, so you can just tap that button and it'll record whatever you do straight to the camera roll, which is really useful. These widgets on here give you more information if you press and hold on them as well.
See, it tells you what these things do. For AirPlay it tells you the different things available to you. It also saves what they're calling Spaces: if you've had particular apps open together, like Mail and Safari, it remembers that I had those open together and it'll bring them back at the same time.

One of the other big things they've brought, if this will load, this is obviously beta software, this is beta one, so some of it is a bit buggy. They've now brought Drag and Drop to iOS. It's more powerful on iPad because you can drag between apps; on iPhone you can drag within a single app. So I can select some text here, this is Safari on the right hand side, and I can just pick it up and drop it in an email here in Mail, between two apps, which is really nice. If I scroll down, perhaps I've got an image, I can pick this up and drop it in my mail. I can even grab the URL from the top of Safari there and pull it down into my email somewhere, which is really nice.

If I pop out a second and I open up Notes. That's from my demo this morning. How do I start a new note? How do I do a new note? Oh yeah. Thank you. Oops. So if I open up Photos and Notes together. You're not limited to just dragging a single item. So if I come into here, go to my photos, I can pick up one, and I can then tap on other items to add them to my drag stack. So I can get a whole load here and drag them all into Notes at the same time. I can drag stuff back into Photos if I want to as well.

Whilst I'm in Notes, I'll show you a few other quick things. The keyboard now has this cool, I think they call it a "quick type" keyboard, so you can pull down on letters, oops, trying to show you the animations. So if you pull down, you'll trigger whatever the symbol at the top is, which is super useful if you've got passwords with complicated symbols and stuff; you don't have to switch between keyboards anymore. There's now a document scanner built into Notes. Because why not? But it works really well.
So I can point it at this. You can see it doing detection of the rectangle here. There's actually an API for doing rectangle detection like this yourself, which I'll have a quick look at later. But I can just snap a picture of that. You can drag the corners around, and it will fix all the orientation and colours and everything for you, which is super nice. Finally, let's just have a super quick look at, sorry? Apparently it should be able to make handwriting searchable. I don't know if it does it from documents. It's not showing up on the side there, but it can make handwriting searchable, and if you draw with the Apple Pencil, it all becomes searchable as well.

The App Store has been redesigned. I won't spend long on that. But it's a whole new way of looking at your apps. They've built in basically stories about top apps, so you can find out more about them, find out about related things. So here's kind of an article about Monument Valley 2, and then there's a link to the app at the bottom. One thing to notice in here is the new style for tab bars. On a wide display, previously you'd have the image stacked atop the title; now they're side by side, which gives the text more room. The product pages in here now look a lot like Google Play product pages, in that you have the ratings and what have you at the top. You have the reviews further down, with replies, which you can obviously now do on iOS. That's that.

The final thing to show you is a new app called Files. This replaces the previous iCloud Drive app. And this is basically a central place for all your documents, both on your iPad and in the cloud. So at the moment this shows iCloud Drive online, but apps like Dropbox and what have you will be able to add their own storage into here. Developers can add it themselves. There's a couple of APIs: you can add an extension to your app which serves up your files to be used. This all supports Drag and Drop as well. You can drag files into here.
If I switch over to Mail, I received an email earlier with an attachment. I can just pick it up from Mail and drop it into a folder in Files, and that'll be available in iCloud Drive anywhere. So that's kinda cool. And again you can drag them backwards. If I'm composing an email and I want to stick a file in there, I can just pull it out of here. The other thing apps will be able to do is present a document picker, using the Files app, within your own app. So it should kind of centralise all of that. So that's just a quick look at some of the changes to the OS from the user point of view.

So in terms of APIs there's kind of three big features that Apple was pushing. And the first of these, which obviously we just saw, was Drag and Drop. As I said, on iPad you can drag between apps; on iPhone, only within a single app. Text fields and text views get this behaviour for free. You can customise it depending on the sort of things you want to be able to drag. If you want to add it to custom views it's really straightforward. There's two new things called UIDragInteraction and UIDropInteraction. You just create these, add them onto your views, and then customise how you want things to work. So, here I'm creating a drop interaction, adding it to an image view, and now that image view will be able to accept drops. There's a few delegate methods you can implement. So for instance, it will ask you if you can handle a drop session. So I'm saying here, I can only handle drops of images and only one at a time. You also get notified when the drag started and ended, so you can update your UI to indicate that something can be dropped on. This asks you to determine what's going to happen when you drop. So you can do things like copy a file, or you can move it completely and take it away from one place. So if you're rearranging rows like in a table view, you'd want it to move, 'cause you wanna delete it from its original place and put it somewhere else.
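As a rough sketch of what that looks like in code, assuming a view controller with an image view to drop onto (the names here are illustrative, not from the talk's slides), the drop side might be wired up like this:

```swift
import UIKit

class PhotoDropViewController: UIViewController, UIDropInteractionDelegate {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(imageView)
        imageView.isUserInteractionEnabled = true
        // Create the interaction and add it to the view, as described above.
        imageView.addInteraction(UIDropInteraction(delegate: self))
    }

    // Only accept drops of a single image.
    func dropInteraction(_ interaction: UIDropInteraction,
                         canHandle session: UIDropSession) -> Bool {
        return session.canLoadObjects(ofClass: UIImage.self) && session.items.count == 1
    }

    // A copy proposal: the dragged item stays where it came from.
    // For rearranging table rows you would return .move instead.
    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    // When the user lets go, ask the session to load the images.
    func dropInteraction(_ interaction: UIDropInteraction,
                         performDrop session: UIDropSession) {
        session.loadObjects(ofClass: UIImage.self) { [weak self] items in
            self?.imageView.image = items.first as? UIImage
        }
    }
}
```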
And then when you actually perform the drop, you get a drop session. You can tell it to load objects of a particular type and then update your UI as you need. Sometimes, perhaps if you're loading bigger objects, say photos that need to be fetched from the web, you can show a loading UI while those get fully loaded. And you can kind of customise that how you'd like. Don't know what that sound was. Collection view and table view have all of this built in as well. They both now have a drag delegate and a drop delegate property. And they have similar methods to the interactions, so you can customise the way the dragging happens, where things go when they get dropped, that kind of thing. It's a really powerful API and the documentation is really good for it. There's loads of sample code and it talks you through all the steps of the process.

The next big thing is Core ML. So Apple use Machine Learning throughout their products, from object recognition in Photos so you can search them, to Siri, to predicting words and responses when you're typing. So now they're trying to open it up so it's easier for anyone to incorporate it into their own apps. It's got quite a simple API and it's built into Xcode. And it's built on top of Apple's Accelerate and Metal frameworks, which run on the GPU, so things are potentially super fast. So Core ML is kind of a base layer for all of this. Then they have some frameworks on top of that. They're using it to power their Natural Language Processing, for recognising text and that kind of thing. Even GameplayKit uses it for AI in games. We'll look at the Vision framework in a second; that's kind of specialised towards computer vision. I'm told it supports all of these kinds of models. That means nothing to me, I don't really know much about Machine Learning at all. But if you do, hopefully that means something to you. So it takes in a model.
The file format is called mlmodel. Apple have tools to convert things to this format, and they're open sourcing it all as well. So a trained model is the result of applying Machine Learning algorithms to a set of training data. And then the model will take an input and predict some outputs based on that. So that could be, as I said earlier, tagging images, handwriting recognition, predicting text, that kind of thing. Apple have got a bunch of popular models that they have already converted to the right format. Most of these seem to be around finding objects in images. As I said, they've also got tools for converting your own models. You basically just drag it into Xcode, and it'll tell you all about the model, the inputs it takes. So this one takes images, and it gives you some strings and classifications out. And it will generate a Swift interface for interacting with that model. So with this particular one, you can get a prediction by giving it an image, and then it will give you some output telling you what's in it.

So I'll do a super quick demo of that as well. If this is still up. I have no idea if this will predict the right things. This is using one of the image prediction models Apple has. So I'm just going to point it at some stuff and see if it can work out what they are. Water bottle. So this is showing what it's detected in the image, and the number is the confidence level between zero and one. How about this? Drumstick or fountain pen. Come on. Screwdriver. Bottle opener. Corkscrew. Okay, sometimes it works and sometimes it doesn't. But anyway, this took a few minutes to put together really. It's just taking the image data and feeding it straight into the model, then showing the output the generated interface gives you. So that's kind of cool. - [Audience Member] Does that need to be online to work? - No, their aim is that it's all on device. Apple are obviously very big on their privacy and what have you.
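That demo boils down to very little code. A sketch of the idea, where `MobileNet` stands in for whichever .mlmodel file you've dragged into Xcode (the class name and its `model` property come from the interface Xcode generates, so they will differ for your model), using Vision to handle resizing the image to the model's expected input:

```swift
import UIKit
import CoreML
import Vision

// Classify a UIImage and hand back the top label plus its 0-1 confidence.
func classify(_ image: UIImage, completion: @escaping (String, Double) -> Void) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Classification models produce VNClassificationObservation results,
        // sorted with the most confident prediction first.
        guard let top = (request.results as? [VNClassificationObservation])?.first
        else { return }
        completion(top.identifier, Double(top.confidence))
    }

    // Vision scales and crops the image to whatever the model expects.
    try? VNImageRequestHandler(cgImage: cgImage).perform([request])
}
```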
And they said they've optimised it all to run on device, but also not consume too much power and that kind of thing. So it's all running on the GPU on the device essentially.

So the Vision framework is built on top of this. And this is targeted specifically at computer vision. So you can do things like face recognition, including difficult faces side-on or obscured faces. It can pick out all the features of the face. If you've ever used face detection in Core Image, you'd just get simple blocks like this around the eyes and mouth, but now you get all the features. It'll do image registration if you're doing things like panoramas, stitching images together. Rectangle detection, which we saw with scanning documents earlier on. It can do text detection in images, bar codes. It can do object tracking in movies. And again there's a really simple API for all of this. So here's an example of face detection. You just create a face detection request, give it an image, and then it will spit out some results. For things like object tracking, again it's a very similar API. You have a sequence request. You give it some stuff to track, you tell it what you want to look for, and it will just spit out results as it goes. And there's different requests for all the different types of recognition you might want to do.

So the final kind of big thing Apple really pushed was ARKit. So this is augmented reality. Apple are trying to get into this in a big way. It uses Visual-Inertial Odometry, which again I don't know much about. But it uses computer vision analysis to track key features in video from your phone, and then it also uses your device's motion, and brings that all together to understand the scene that you are looking at. So it'll try and find flat planes within a scene; you can also hit test objects to detect how far away they are. And it can estimate lighting in your scenes as well.
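Before moving on to the ARKit demos, here's a sketch of that Vision face-detection flow: create a request, hand it an image, read the results. This uses the landmarks variant so you get the detailed features mentioned above rather than the old Core Image boxes:

```swift
import UIKit
import Vision

// Run face-landmark detection on an image and log what was found.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is in normalised (0-1) image coordinates.
            print("Face at \(face.boundingBox)")
            // Landmarks give you the individual features, not just a box.
            if let leftEye = face.landmarks?.leftEye {
                print("Left eye traced with \(leftEye.pointCount) points")
            }
        }
    }

    try? VNImageRequestHandler(cgImage: cgImage).perform([request])
}
```

The object-tracking requests follow the same shape, except you perform them on a `VNSequenceRequestHandler` so state carries across video frames.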
And there's built-in support for SceneKit and SpriteKit. These are Apple's main game frameworks: SceneKit is for 3D and SpriteKit is for 2D. Again, the documentation is really good for these, and there's really good template projects for both of those as well to get started. I'll show you a quick demo of these. Um ... Whether this will work when I tap it again I don't know. Cool. So this is obviously the room. So this is a simple SpriteKit demo, putting 2D objects in the scene. So I can tap here. I can't go very far 'cause I'm tethered to the Mac, but I can put some sprites in the scene here, and you can see they just hover above the floor. The tracking seems really solid with Apple's solution. So I can move all around those. So these are 2D objects, so as I move, they'll rotate to face me. But again they stay really solidly within the scene there. Which is cool.

This is Apple's kind of template app for this. There's not much code to get started. This is their SceneKit example. Let's see if this'll work. Sometimes it takes a second to pick up the planes of the room. So this is just putting a 3D model in the scene. And in terms of code for these, it's just a case of, if you're using SpriteKit or SceneKit, you can create an AR scene and it basically takes care of it for you. You can place your objects within that, create anchors where things are. So again, I can place an object in here. So let's put a chair in the room. That looks like quite a big chair actually. I can shrink it down a bit. But I can walk right round it here. I can move closer if I want. But it seems really rock solid tracking. Pull everything off there. But yeah, I'm really impressed with the way they've got it working. If I move up here it'll change and detect a different plane up here. I can put something else on here. Let's see, a nice candle. And again, oh, that's sliding off.
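The SceneKit side really is only a handful of lines. A sketch of the core of it: run a world-tracking session with plane detection, then hit-test taps against the detected planes to place objects. (I'm using the `ARWorldTrackingConfiguration` name here; the configuration class was renamed during the betas, so check the current headers.)

```swift
import UIKit
import ARKit
import SceneKit

class ARDemoViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking with horizontal plane detection: this is what
        // finds the floor and table surfaces in the demo.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Place a small box wherever the user taps on a detected plane.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first
        else { return }

        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        // The hit result's transform gives the world position on the plane.
        let t = hit.worldTransform
        box.position = SCNVector3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
        sceneView.scene.rootNode.addChildNode(box)
    }
}
```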
In this first version, or at least in this demo app, I find if you switch between two different surfaces, sometimes it'll just completely lose track of which one you're looking at. But let's try one more time. Yeah, that's not right. That's a floating cup. Let's put it down on here. There we go, that's better. And now I can move around it and look at it. What have you, so, yeah, it's really impressive. It also supports Unity and Unreal at the moment as well, apparently. And like I said, really, really quick and easy to get started with.

So, some other stuff. This is just a random collection; I'm by no means going to cover everything they introduced. SiriKit was introduced last year to allow you to interact with Siri from your own apps. They introduced a small set of, I can't remember what they call them. - [Audience Member] Intents. - Intents, or areas that you can interact with. So Payments, Photo Search, Ride Booking, Messages. Domains, that's the word I was thinking of, that you can interact with. They've now added a few more, so Payments can now do transfers, like bank transfers and things. You can also do interactions with Lists and Notes from Siri.

They've finally opened up NFC, so there's a Core NFC framework. There's a few caveats, in that it only works while your app is in the foreground, you can only do reading, not writing, of NFC, and it's only on the iPhone 7 and 7 Plus, even though some of the older devices do have NFC support. If you're an NFC nerd, apparently it's NDEF tag types 1 to 5, if that means anything to anybody.

PDFKit. This was previously only available on macOS; it's now on iOS. If you've ever done any PDF creation before, you'll probably have dealt with Core Graphics and its C-level APIs. Now there's a nice, new, modern Swift and Objective-C way of doing PDFs. It's for viewing and creating, opening, modifying, drawing things on PDFs. You can also select and search for text, that kind of thing. So it's nice to see.
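To give a flavour of PDFKit, here's a minimal sketch of viewing a document and searching its text. The file URL and the search term are just placeholders:

```swift
import UIKit
import PDFKit

// Display a PDF in a container view and run a text search over it.
func showPDF(at url: URL, in container: UIView) {
    guard let document = PDFDocument(url: url) else { return }

    let pdfView = PDFView(frame: container.bounds)
    pdfView.document = document
    pdfView.autoScales = true   // fit pages to the view
    container.addSubview(pdfView)

    // Text search: one PDFSelection per match, which you can
    // highlight or scroll to.
    let matches = document.findString("invoice", withOptions: .caseInsensitive)
    print("Found \(matches.count) matches")
}
```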
Password AutoFill. So this is accessing your Safari saved passwords from an app. Essentially you just have to tag your text fields as being usernames and passwords. You link your app with a website, or an associated domain, and then when the user is typing, they'll get suggestions for passwords. They can also tap on the key there, authenticate with their device, and see a list of all their logins to fill in something else. So hopefully this'll be able to streamline the login process in some apps, if you've got an existing web interface to go with it.

MapKit has some nice additions. With the existing built-in pins we had in MapKit, you'd have to tap on a pin to find out more about it, to see the title and subtitle. They've now introduced these new marker annotations, which have the title and subtitle included. You can tint them, you can add images and text to them. MapKit now supports clustering as well; previously, working in an agency, I had to implement this quite a few times and find some third-party library to do it with. So if you've got a map with loads of pins on it, you can now set a display priority, which tells it which ones to hide if there's not enough room for all of them. You can also set a clustering identifier, so it will group similar items. So say this is a bike app which has bikes and unicycles. You could give them different clustering identifiers, so it wouldn't group the bikes with the unicycles, for example. And these cluster annotations are fully customisable as well, so you can provide your own annotations for those, which is really nice.

So next, a few additions in Xcode 9. It's got a new editor, which is really nice. This is apparently based on the one Apple built for Swift Playgrounds on iPad. They say it's super fast and it can handle big files. I tested it on a, was either a 15 or 20,000 line file, which Xcode 8 completely chokes on, and it pretty much worked.
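Going back to the MapKit clustering for a second, the setup is just a couple of properties on your annotation view. A sketch for the bike example (the view class name is mine, not from the talk):

```swift
import MapKit

// A marker view for bike annotations that opts in to clustering.
class BikeMarkerView: MKMarkerAnnotationView {
    override init(annotation: MKAnnotation?, reuseIdentifier: String?) {
        super.init(annotation: annotation, reuseIdentifier: reuseIdentifier)
        // Annotations sharing a clusteringIdentifier get grouped together;
        // unicycles would use a different identifier, so they cluster separately.
        clusteringIdentifier = "bike"
        // When there isn't room for everything, lower-priority markers hide first.
        displayPriority = .defaultLow
        markerTintColor = .purple
        glyphText = "B"
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```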
For the most part it seemed fairly solid. It's got better handling of formatting, of fonts and sizes. You can change the size more easily as you're editing text. If you have issues in your code, like warnings and errors, those now flow along with your text.

One big thing they've now added to Xcode 9 is refactoring. This has been held over us for quite a while in terms of Swift, that you can't refactor anything. I don't think anyone here is excited enough about this. One second. Let's try this instead. ♫ Hallelujah ♫ Hallelujah - I think that's more appropriate. So there's a nice set of refactorings to start with. Things like adding missing protocol requirements, extracting variables and methods, expanding switch statements. And I think Apple said they're open sourcing the engine behind this, so it's going to be easier for people to extend in the future. So it's nice to finally have this support.

GitHub is now built in to Xcode, so you can easily clone repositories and upload and download and that kind of thing. Wireless debugging is now built in, which is really cool. So as long as you've plugged in a device once to debug on it, you can then unplug it, and from then on it just shows up in the device menu and you can build and run straight to it. And it all works as you'd expect. It also works with Instruments and things as well. And it seems very good.

There's a Main Thread Checker. So this will pick up, at run time, occurrences of doing things like UI work not on the main thread. It helps track down those issues, which can be quite hard to find. Xcode also now handles PDF vector assets properly. So you've been able to stick PDF assets in for a while, but at compile time they'd get built down to PNGs at the fixed sizes you need. Now it can also preserve the vector data, so you can get dynamic sizes on the fly as the app is running. You can also now name colours.
So if you've got a specific colour palette for your app, you can name the colours and instantiate them by name at run time, which is cool.

iTunes Connect has a whole bunch of additions. If you're not familiar, this is Apple's system for uploading apps to the App Store. I won't go into everything, but two things are worth picking out. Up until now, when you uploaded a new version of an app, it would reset your rating on the App Store. Now you can stop that happening: you can keep your rating when you upload new builds of the app, which is really nice. You can also now do phased releases, like Android, which is really cool. So you can roll an app out to a certain set of customers over a period of time and grow it as it goes.

Finally, a few additions to Swift, just a couple of small ones. First is KVO, or Key-Value Observing. So this is for monitoring values of objects over time. Previously if you were doing this in Swift, you'd have to create a context for your observer of an object. Then you'd have to register it to observe something. So here I'm observing this person's age. You'd have to implement this method, observeValue(forKeyPath:of:change:context:), check you're supposed to be handling it, do what you want with it, and if not, call super. It's all kinds of horrible. Now, it's as simple as this. You can just say person.observe, with this new way of specifying a key path, and then it will give you your changes, which is nice.

Codable. This is a new system Apple put in for serialising and deserialising to different formats. So here we've got some JSON, and perhaps a Swift struct that maps to that. All we need to do is say our custom type conforms to Codable, and to do that we just make sure all the types within it are also Codable, and all the Foundation types are. Then you create a JSONDecoder, tell it to decode to your type from the JSON data, and job done. It's the same for encoding.
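The whole Codable round trip described above fits in a few lines. A sketch with a made-up Person struct:

```swift
import Foundation

// A struct is Codable for free as long as all its stored
// properties are themselves Codable, which these are.
struct Person: Codable {
    let name: String
    let age: Int
}

let json = Data("""
    {"name": "Ada", "age": 36}
    """.utf8)

// Decode straight from the JSON data into the struct.
let person = try! JSONDecoder().decode(Person.self, from: json)
print(person.name, person.age)  // Ada 36

// Encoding works the same way in reverse.
let encoded = try! JSONEncoder().encode(person)
```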
Other formats are supported too, like property lists. And it's very flexible, so if your keys don't match one to one between your JSON and your object, you can customise the way all of that works. With JSON, for example, you can also say how you want dates to be decoded. For Swift, I would recommend a playground you can download called "What's new in Swift 4", and it takes you through all of the new features. That's available from here. And finally, if you're going to watch WWDC videos yourself to check out what's new, I'd highly recommend going to wwdc.io; it's an unofficial WWDC app. You can get all the sessions in there. You can mark which ones you've already seen, mark favourites, download stuff, play them faster than one speed, so if you have to get through a load, one and a half speed. Two times is too much, but one and a half is okay. Yeah, oh yeah. Get all the resources, that kind of thing. So that's a really useful thing to look up. And that's basically it.