Just some personal opinion, not the views of my employer or anyone else I may be associated with:
Adobe MAX has a series of "woo" lightning talks called Sneak Peeks that demo what their R&D folks can do with machine learning to make design jobs... what a capitalist might call... "more productive," and there's naturally a font one.
No actual wooing, like in 2015, but it's very impressive. Congratulations to the folks who worked on it!
https://youtu.be/eTK7bmTM7mU
It looks like it's mainly "style transfer" a la pix2pix: you draw one glyph (in color, as a bitmap, or as a vector) and it gets extrapolated into a whole typeface, with auto-trace, auto-space, auto-kern, auto-build, and auto-install steps along the way. The final demo even throws in some augmented-reality live video image replacement to do it all in apparent real time, for good measure.
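The "reapply the edit to the whole font" idea can be caricatured in a few lines of Python. In this toy sketch the "style" is literally just a hole pattern lifted from one edited exemplar glyph and stamped onto every other glyph; the real demo presumably uses a learned image-to-image model a la pix2pix, and all the names here are made up for illustration.

```python
def punch_holes(glyph, hole_centers, radius):
    """Return a copy of a binary glyph bitmap with circular holes cut out."""
    out = [row[:] for row in glyph]
    for cy, cx in hole_centers:
        for y in range(len(out)):
            for x in range(len(out[y])):
                if (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2:
                    out[y][x] = 0
    return out

def apply_style(glyphs, hole_centers, radius):
    # The "style" is just the hole pattern taken from one edited exemplar;
    # reapplying it to every glyph stands in for the learned transfer step.
    return {name: punch_holes(g, hole_centers, radius)
            for name, g in glyphs.items()}

# Two toy 8x8 solid-square "glyphs".
square = [[1] * 8 for _ in range(8)]
glyphs = {"A": square, "B": [row[:] for row in square]}
styled = apply_style(glyphs, hole_centers=[(2, 2), (5, 5)], radius=1)
```

The punchline of the demo, of course, is that the transfer generalizes to glyph shapes it has never seen, which a fixed stencil like this cannot do.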
The first demo poses a kind of trolley problem for font licensing: a sans serif font, presumably owned by Adobe, is converted to vector outlines on the artboard; one glyph gets holes punched in it like Swiss cheese; and then that glyph can be dragged and dropped back onto the text element, reapplying the hole punching to the whole font via style transfer, so arbitrary new text can be typed in the derivative typeface. I wonder how the OFL will play out there, since derivatives must remain OFL.
The second demo, where a phone camera snap of one of those ubiquitous and iconic sign-painted trucks of India is Fontphorified into a working font just slowly enough that you can see each glyph being generated, will surely be instantaneous soon enough; perhaps even today, if one used the streaming technology that makes Xbox games work on your phone. But I guess in the USA, where letterform designs are completely public domain, there's no licensing issue at all with that.
I wonder how good the results are when you point your Fontphoria iPad at the American Type Founders specimen books.
And I wonder what this would look like as a font format. My guess is that it will not be forming the basis of a CFF3 proposal any time soon, though.
I didn't look too closely, but I believe Adobe isn't actually announcing these Sneak Peeks as features shipping in Creative Cloud next month; they're more a hint of what can be done with state-of-the-art computer graphics software engineering. I suspect a lot of it is what a savvy developer can do in a few weeks with the right idea and the libre-licensed machine learning stuff lying around GitHub.
As they say, the future is already here, it just isn't evenly distributed yet.
After posting this I perused the Twitter hashtag feed #fontphoria. Interesting reading... Some tweets touch on the productivity angle ("saving time"), and it turns out the research code was already posted on GitHub under a libre license.
Interesting times!
Comments
The AI even introduces random typos to make the results look more human. Or did somebody mess up the mock-up?
paper: https://arxiv.org/abs/1712.00516
code: https://github.com/azadis/MC-GAN
video: https://youtu.be/eTK7bmTM7mU
What's most striking in the numerous illustrations in the paper showing the results alongside the original designs (the "Ground Truth") is just how bad the prediction is. While the process is able to make proportional adjustments, it appears to have only one idea of structure. The use of color fonts seems to me a red herring, because one can easily be distracted by the fact that the process gets the colors in roughly the right places, whereas if one were looking at all these shapes as black glyphs, it would be more evident that this is so far just a way to make 90s grunge fonts.
I think it is easy both to overestimate the near-term impact of this (yes, probably pretty darn low) and to underestimate how much machine learning approaches to type design, generalizing from a handful of inputs, might improve in three, eight, or twenty years.
https://youtu.be/NsfbLjNF6zk