I have thought about a peculiar possibility for some time:
- given that we now have variable font technology
- and that we can blend the outlines of two fonts to create a new font
- and that there is, for historical reasons, similarity in the shapes of many fonts
is it possible that at some point in the future all existing fonts will be interconnected so they can be freely blended? Talking about a single font as we understand it today would be meaningless: Clarendon, for example, would just be a single point on a web. It would be surrounded by points that are, say, 90% Clarendon and 10% Garamond, 80% Clarendon and 20% Garamond, 90% Clarendon and 10% Bodoni, 80% Clarendon and 20% Bodoni, and so on. Each point will be able to vary in width and heaviness.

This system will be further refined so it works for italic and upright, making the points binary, and also in such a way that dissimilar fonts, like, say, a dingbat and Futura, or some daFont rejects, will not be able to be blended, or will not make the list at all (it would obviously be harder to blend serifs to sans serifs than serifs to serifs and sans to sans, but many fonts simply do not cut the mustard; of course there will always be enthusiasts/blendmonkeys to blend KaiserzeitGotisch with Logger or DeathMetalGutturalFont).
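The "point on a web" idea above is, at its core, a weighted average of glyph outlines. Here is a minimal sketch, assuming the fonts are already structurally compatible (same point counts and ordering per glyph), which is the hard part the idea glosses over; the function name and the toy coordinates are invented for illustration.

```python
# Hypothetical sketch: each font is a point in a "blend space", and a blend
# like "90% Clarendon, 10% Garamond" is a weighted average of the outline
# coordinates. Assumes compatible outlines (same point count and order).

def blend_outlines(outlines, weights):
    """Weighted average of compatible glyph outlines.

    outlines: one point list per source font, each a list of (x, y) tuples
              in the same order.
    weights:  blend factors summing to 1.0.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    n_points = len(outlines[0])
    assert all(len(o) == n_points for o in outlines), "outlines must be compatible"
    blended = []
    for i in range(n_points):
        x = sum(w * o[i][0] for w, o in zip(weights, outlines))
        y = sum(w * o[i][1] for w, o in zip(weights, outlines))
        blended.append((x, y))
    return blended

# Toy stand-ins for two compatible glyph outlines:
clarendon_stem = [(0, 0), (100, 0), (100, 700)]
garamond_stem  = [(0, 0), (90, 0),  (95, 680)]
print(blend_outlines([clarendon_stem, garamond_stem], [0.9, 0.1]))
```

Extending this to more than two masters is just adding more entries to `outlines` and `weights`, which is exactly why every font could in principle sit on one shared web.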
Is all of this a real possibility for future technology, or rather not? What do you think?
Comments
That is, if their social media identities are free of blemish.
https://www.youtube.com/watch?v=eZiP_TIa96g
And what looks hip and what looks dopey is super subjective. But I think you are exactly right that it's dopey to me, because it looks super dated.
I always think of Marshall McLuhan with this stuff.
I question the premise of Vasil's timing. This isn't particularly about the future, or font APIs, or OpenType v1.8....
I'd say phototype technology deterministically made super tightly spaced type so easy to do, past the point of good readability but equally beyond what previous type technologies could do so easily. People ignorant of the dominant culture determined by previous type technology were not constrained by that thinking and did what they hadn't seen before, which looked dopey to the metal-minded people.
In the same way, I think the initial Adobe-Apple-Aldus-Altsys DTP technology deterministically made grunge type so easy to do, and the same thing happened, to the chagrin of the phototype-minded people. There were, I think, some interesting class dynamics going on too.
So what's interesting for me here now is that this GX stuff is older than my sister, so the media determinism has already played out in Latin typeface culture: the people making text type in that time used early interpolation tech to publish "semi serif" styles within closely coupled sans/serif superfamilies, and the fashion appetite for semi serifs came and went.
By the mid 90s there were several tools like FontChameleon and InfiniFont for end users to frankenfont Helvetica to Bodoni to their heart's content.
So I don't expect the kind of "supervariable" fonts Vasil describes to be very culturally interesting.
Underware made one from the fonts in their collection and presented it at TypoLabs (and elsewhere) earlier this year, and the video is on YouTube. Most of the latent space is garbage, not even dopey.
These techniques are being used in some specialty applications like astronomy, medicine or geography, but not really in end-user-facing contexts.
For fonts, such techniques may be useful for font fallback or digitization of printed matter, but I don't think they'll become a mainstream application.
http://www.bbc.com/news/av/technology-41935721/why-these-faces-do-not-belong-to-real-people
We have had these capabilities repeatedly over the years: InfiniFont, FontChameleon, etcetera.
Early versions of such tech required that the fonts be manually adapted into the system. Still, you could make a FontChameleon "descriptor" for a single style of a font in about a day. Maybe a couple of days to a week depending on how precise you needed to be.
Once you had that, you could blend to your heart’s content.
But you know what? Designers/users did not go gaga over this technology. It just didn't change the world. Maybe that's because Adobe bought FontChameleon to use as a ROM font compression tech so it didn't have full opportunity in the marketplace. But I don't think that's the only reason. Unless you had a particular need, it was just a nice toy.
FontLab VI allows you to blend nearly arbitrary fonts, too, as long as they are structurally compatible. It does this without the same amount of setup it used to require, which is a neat feature, and will come in super handy. But at the end of the day, that is just a tool. Better than the previous blend tool, but not a revolution in and of itself.
But I am glad I posted the thread; the answers are very insightful.
I think the cost of creating and operating a system that could synthesize all songs in the world would be higher than just serving the plain recordings of all these songs that already exist.
Of course it is and will be possible to use clever outline compatibilization algorithms to serve interpolations between many standalone fonts. But such a server will most likely still just hold the existing fonts, and then compatibilization and interpolation would be attempted only upon request (and then perhaps cached).
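The server scheme described above (store only the originals, compatibilize and interpolate per request, cache the result) can be sketched in a few lines. This is a minimal sketch under stated assumptions: `compatibilize` and `interpolate` are placeholder stand-ins for the real algorithms, and the font store holds toy strings rather than outline data.

```python
# Sketch of on-demand blending: the server keeps only the existing fonts;
# compatibilization and interpolation happen per request and are cached.
from functools import lru_cache

# Toy stand-in for a store of plain, existing fonts.
FONT_STORE = {
    "Clarendon": "<clarendon outlines>",
    "Garamond": "<garamond outlines>",
}

def compatibilize(a, b):
    # Placeholder: a real implementation would match point structures
    # between the two sets of outlines.
    return (a, b)

def interpolate(pair, t):
    # Placeholder: a real implementation would blend the matched outlines.
    return f"blend({pair[0]}, {pair[1]}, t={t})"

@lru_cache(maxsize=1024)  # "perhaps cached": repeat requests are free
def blended_font(name_a, name_b, t):
    pair = compatibilize(FONT_STORE[name_a], FONT_STORE[name_b])
    return interpolate(pair, round(t, 2))  # quantize t so the cache can hit

blended_font("Clarendon", "Garamond", 0.10)  # first request: computed
blended_font("Clarendon", "Garamond", 0.10)  # identical request: from cache
print(blended_font.cache_info())
```

Quantizing `t` before caching is the design choice that makes caching pay off: without it, nearly identical blend requests would each miss the cache.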
Time and money have nothing to do with it.
And I don't think wrapping billions of vars in a file is the practical solution to most clouds, but I liked it.
A mentioned company already threw infinite cash at a mentioned universal font project, had the technology, tools, detailed parameters, fonts and font library, and still ended up at a cul-de-sac by tossing the parameters and obeying only the metadata needs of a legacy format or two.
It's much cheaper, faster and close to infinitely more practical today to expose systematically discovered parameters of existing fonts, and systematically calculated parameters of variable fonts, in one kind of data file. The only difference is that for existing fonts the discovered values are those of discrete instances, while for variable fonts the calculated values are samples from a range.
The format exists to do either the monster font or the gathering of values, up to a point; then, for either monstering or gathering, one must go beyond the legacy metadata to meet the needs of products and services built on a parametrically related database of fonts and font families.
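The "one kind of data file" idea above can be sketched as a small record format. This is a hedged illustration, not a real schema: the field names (`family`, `kind`, `params`) and the axis names are invented; the only point being demonstrated is that the same record shape covers discrete instances and sampled ranges.

```python
# Hypothetical records: static fonts expose discovered parameters as
# discrete values; variable fonts expose the same parameters as ranges
# that can be sampled. All names here are invented for illustration.
records = [
    {"family": "ExampleSerif", "kind": "static",
     "params": {"weight": 400, "width": 100}},               # discovered instance
    {"family": "ExampleSans", "kind": "variable",
     "params": {"weight": (100, 900), "width": (75, 125)}},  # sampleable range
]

def instances(record, samples=3):
    """Yield concrete parameter sets: the single instance for a static
    font, or evenly spaced samples across each axis for a variable one."""
    if record["kind"] == "static":
        yield dict(record["params"])
        return
    for i in range(samples):
        t = i / (samples - 1)
        yield {axis: lo + t * (hi - lo)
               for axis, (lo, hi) in record["params"].items()}

for rec in records:
    print(rec["family"], list(instances(rec)))
```

The consumer of the file never needs to care which kind it is reading: both kinds yield plain parameter sets, which is what makes one database of discrete and variable fonts practical.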
That's not how it would be done; the software would be clever enough to figure that out on the fly. Some results would indeed be ridiculous, but the proportion just has to be low enough for most users to not mind.