WWDC 2015 Wishlist
I stopped being excited about Christmas at around the age of 17. I stopped wondering what Santa would bring and stopped having sleepless nights leading up to the day itself. This made me sad for many years, because I felt I had lost something when I no longer felt that exhilaration and anticipation for something new. But then I became an iOS developer, discovered WWDC, and found that I'd gotten those feelings back again.
WWDC is the new Christmas!
And here I go, writing my WWDChristmas list to Santa Cook and his merry elves a few days before the big event. Not because it matters (I know that Santa Cook isn’t real and even if he were, he wouldn’t be reading the WWDChristmas lists from every developer in the world), but because it’s fun to wish for things I really want and maybe, just maybe, see a few of them come true!
So here, in no particular order, is my WWDC 2015 Wishlist:
Most of the rough edges in Swift smoothed out (see a partial list here) and true consistency throughout the language. Rather than having similar things like structs and tuples, or functions and closures, exist as separate one-off implementations full of weird gotchas, I'd like to see all types reduced to common, consistent abstractions. I want all the awesome concepts and great ideas in Swift to be honed into a logical and intuitive system that works exactly the way I would think it should. Or at least just a few steps closer to that ;)
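To make the struct-versus-tuple example concrete, here's a minimal sketch (type names invented) of the asymmetry as it stands today:

```swift
// Structs are nominal types: they can adopt protocols and gain methods.
struct Size {
    var width: Double
    var height: Double
}

extension Size {
    func area() -> Double { return width * height }
}

// Tuples are structural types: no protocol conformances, no extensions,
// and no methods, even when they carry exactly the same data.
let tupleSize = (width: 3.0, height: 4.0)
let area = tupleSize.width * tupleSize.height // logic must live outside the type
```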
Swift reflection and optional dynamic runtime features (analogous to objc/runtime.h). Optional in the sense that unless you mark a type as @dynamic, it isn't available for dynamic dispatch, runtime manipulation, or reflection, and instead uses faster static dispatch by default.
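As a point of contrast, here's a minimal sketch of what introspection requires today: dropping down to the Objective-C runtime, which only sees NSObject-derived types. The @dynamic attribute in the trailing comment is hypothetical, shown only to illustrate the wish.

```swift
import Foundation
import ObjectiveC

// Today, introspection only works through the Objective-C runtime, and
// only for classes visible to it:
class Router: NSObject {
    func handle(message: String) {}
}

var methodCount: UInt32 = 0
let methods = class_copyMethodList(Router.self, &methodCount)
println("Router exposes \(methodCount) methods to the runtime")
free(methods)

// The wish: a native, opt-in Swift equivalent, e.g.
// @dynamic struct PureSwiftRouter { ... } // hypothetical attribute
```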
A public Swift roadmap. I know that Apple doesn’t announce or share future plans, but we aren’t talking about consumer products here. There is great value in developers knowing what the future plans for a language are and having a sense for which paradigms will be embraced, and which will be discouraged in the near and medium-term. It’s hard to argue that language features being discussed in advance would somehow be a competitive disadvantage for Apple’s business of selling hardware and services. Nor do I think that Apple discussing a future language feature would mean that some other company would gain an advantage by releasing their own brand new language with that feature, or trying to bolt the feature into an existing language. In fact, there really isn’t any other company with sole control of a popular new language that could really compete with Apple in this regard anyway. So why not be nice to developers and expose the Swift language to more and better discussions before a feature is added? It can only benefit everyone.
A better system for UIKit size classes. Currently, I can only associate constraints with pre-specified groups of device sizes and orientations (so a huge iPhone 6 Plus in portrait is forced into the same layout as a tiny iPhone 4S in portrait), and there is no way to set up different layouts for the specific screen sizes and orientations I actually want to design for. Size classes don't even let me create different layouts for iPads in portrait versus iPads in landscape. What I'd like to see is a way for developers to specify their own constraint sets for any number of arbitrary screen dimensions: for example, if the screen is between 400 and 800 points in height and between 300 and 500 points in width, then apply these constraints and this layout. This would work great with the resizable simulators and with supporting things like split-screen multitasking, future unannounced screen sizes, etc.
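Here's a minimal sketch of what that could look like; SizeRange and the commented-out addConstraints(_:forSizeRange:) call are hypothetical, invented purely to show the shape of the API:

```swift
import UIKit

// Hypothetical: a band of screen dimensions, in points.
struct SizeRange {
    let widthRange: ClosedInterval<CGFloat>
    let heightRange: ClosedInterval<CGFloat>
}

// Matches the example above: heights of 400-800 points, widths of 300-500.
let tallNarrow = SizeRange(widthRange: 300...500, heightRange: 400...800)

// Hypothetical registration, instead of pinning constraints to a size class:
// view.addConstraints(tallNarrowConstraints, forSizeRange: tallNarrow)
```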
A way to work better with multiple storyboards. Right now, using storyboards is a painful juggling act between keeping everything in fewer storyboards so you can easily segue between scenes and reuse embedded view controllers, and splitting the UI into as many small storyboards as possible so multiple developers can work on them at the same time without merge conflicts and toe-stepping. I'd like to see this solved by something like "symbolic scene links" that allow each scene to live in its own storyboard but be embedded by reference into other storyboards. Once embedded, the scene would appear visually in the other storyboard and could be hooked up to segues, etc., but any edits to the content of the scene itself would only update that scene's own separate storyboard. This would let me go back to one big storyboard for an entire app flow: modifying an embedded scene in that main storyboard would not change the main storyboard's XML (which would contain only a reference to the embedded scene, not its actual contents), but would instead change only the XML of the storyboard the scene was referenced from. This would be a huge win for projects that have, you know, more than one developer working on UI at the same time.
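For contrast, here's roughly what crossing storyboards requires today, a minimal sketch assuming a separate storyboard named "Profile" containing a scene with the identifier "ProfileViewController" (both names invented for the example). It works, but gives up the visual embedding and segue support described above:

```swift
import UIKit

// Today's workaround: the scene lives in its own storyboard, but reaching
// it means leaving Interface Builder and writing glue code.
func showProfile(fromViewController presenter: UIViewController) {
    let profileStoryboard = UIStoryboard(name: "Profile", bundle: nil)
    let profileVC = profileStoryboard
        .instantiateViewControllerWithIdentifier("ProfileViewController") as! UIViewController
    presenter.showViewController(profileVC, sender: nil)
}
```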
More kinds of constraints in Auto Layout, such as font-based constraints (font size of the first item >= font size of the second item), scale-based constraints, and other transform-based constraints like rotation and z-position, which would allow for more flexible and creative layouts.
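A hypothetical sketch of what a font-based constraint might look like, reusing the shape of the real NSLayoutConstraint initializer; the .FontSize attribute is invented and does not exist in UIKit, so the wished-for part is left commented out:

```swift
import UIKit

let titleLabel = UILabel()
let subtitleLabel = UILabel()

// Hypothetical: .FontSize is not a real NSLayoutAttribute, so this is a
// sketch of the wished-for API rather than compiling code.
// let fontConstraint = NSLayoutConstraint(
//     item: titleLabel, attribute: .FontSize,
//     relatedBy: .GreaterThanOrEqual,
//     toItem: subtitleLabel, attribute: .FontSize,
//     multiplier: 1.0, constant: 0.0)
```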
An Apple TV SDK for writing new kinds of apps for Apple TV, possibly with a new way for users to interact (Kinect-like motion sensing, perhaps?).
Brilliant Siri APIs that would allow my apps to declare what types of tasks and requests they can handle (similar to how action extensions do this today). When Siri gets a request from the user that my app says it can handle, it would pass in the parameters provided by the user and allow my app (and any other apps that handle that kind of request) to present a small UI card in response, similar to "Today View" extensions in Notification Center. My app's card would provide an answer or a user interface for completing the desired task, alongside any other apps' cards. For example, if the user asked Siri "What will the weather be like in San Francisco tomorrow?", Siri would parse that into a request with the category "weather" and the parameters "date: [tomorrow's date], location: San Francisco". Siri would then activate Notification Center-style extensions for any apps that support "weather" requests and can handle "date" and "location" parameters, passing those parameters in to each one. The user would see a set of cards from the various weather apps displaying tomorrow's forecast for San Francisco in whatever way each app presents it. Just like in Notification Center's Today view, the user could control which apps are allowed to handle each type of request from Siri and the order in which their cards are displayed.
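None of this exists, so the sketch below is entirely hypothetical: the SiriRequestHandler protocol, the SiriCard type, and the parameter dictionary are invented names meant only to show the shape such an extension point could take.

```swift
import UIKit

// Hypothetical card handed back to Siri, analogous to a Today-widget view.
struct SiriCard {
    let view: UIView
}

// Hypothetical extension point: the app would declare (say, in its
// Info.plist) that it handles the "weather" category with "date" and
// "location" parameters, then implement this protocol.
protocol SiriRequestHandler {
    func handleRequest(category: String, parameters: [String: AnyObject]) -> SiriCard?
}

class WeatherSiriExtension: SiriRequestHandler {
    func handleRequest(category: String, parameters: [String: AnyObject]) -> SiriCard? {
        if category != "weather" { return nil }
        if let date = parameters["date"] as? NSDate,
           let location = parameters["location"] as? String {
            // Render this app's forecast UI for `location` on `date`.
            return SiriCard(view: forecastView(date, location: location))
        }
        return nil
    }

    func forecastView(date: NSDate, location: String) -> UIView {
        // Placeholder: a real weather app would build its card view here.
        return UIView()
    }
}
```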
Siri for OS X that includes the option to type queries (as well as speak them) and can perform system-level functions on demand, like taking screenshots, running custom Finder searches, and turning Do Not Disturb on and off.
A way for my apps to surface their contents in Spotlight search. If I enter a search term in Spotlight on iOS, I see messages, emails, contacts, etc. that contain that term, but no content from other apps. It would be very useful if that were to change so that third-party apps could also respond to search queries in Spotlight.
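Here's a hypothetical sketch of what an app-facing indexing API might look like; SearchableItem and SearchIndex are invented names, not real classes:

```swift
import Foundation

// Hypothetical: a description of one piece of app content for the system
// to index.
struct SearchableItem {
    let identifier: String
    let title: String
    let keywords: [String]
}

// Hypothetical system index: an app registers content with it, and
// Spotlight could then surface that content for matching queries
// alongside Mail, Messages, Contacts, etc.
class SearchIndex {
    static let sharedIndex = SearchIndex()

    func addItems(items: [SearchableItem]) {
        // The system would persist these entries and match them against
        // the user's Spotlight queries.
    }
}

let note = SearchableItem(identifier: "note-42",
                          title: "Q3 planning notes",
                          keywords: ["planning", "notes", "Q3"])
SearchIndex.sharedIndex.addItems([note])
```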
Data bindings in iOS for views in Interface Builder. OS X has them, why can't we? <shamelessPlug>In the meantime, I'll keep using and maintaining the awesome SymbiOSis framework to accomplish this in the best way currently possible.</shamelessPlug>
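For reference, this is what the existing OS X bindings API looks like from code; the bind(_:toObject:withKeyPath:options:) call is real AppKit, while PersonViewModel is an invented example type. The wish is simply an iOS counterpart, ideally configurable in Interface Builder the way bindings are on OS X:

```swift
import Cocoa

// Real OS X API: keep a control's value in sync with a model property
// through KVO, with no glue code.
class PersonViewModel: NSObject {
    dynamic var name = "Jane"
}

let viewModel = PersonViewModel()
let field = NSTextField()
field.bind("value", toObject: viewModel, withKeyPath: "name", options: nil)
// Changing viewModel.name now updates the text field automatically.
```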
Split screen / side-by-side multitasking for iPad. This is already rumored, and I’m ready for it! I think the ability to have my apps running on the side next to other apps and sharing information across the Great Wall of Sandboxing when the user drags data from one app to the other would be a game changer in usefulness and productivity on the iPad.
Anyone else itching to share their WWDC 2015 Wishlist? Leave yours in the comments!