André G. Isaak

About

Username: André G. Isaak
Joined:
Visits: 533
Last Active:
Roles: Member
Points: 37
Posts: 44
  • Re: Cyrillics I really need to bother with

    Yes, for full coverage of Cyrillic you minimally need a combining dieresis, a combining breve (the Cyrillic-looking kind), a combining macron, and a combining acute. I'm not sure about grave. Double acute, double grave, and inverted breve *might* be needed for Serbian poetics, but not for actual day-to-day use (they're used in the Latin alphabet for this purpose, but I'm not 100% sure whether they're also used in Cyrillic).
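
    A minimal sketch of how those marks could be attached, in AFDKO feature syntax -- the glyph names, anchor coordinates, and the list of base glyphs are hypothetical and depend entirely on the font:

    # Combining marks mentioned above, with their Unicode code points:
    markClass [dieresiscomb    # U+0308 COMBINING DIAERESIS
               brevecomb-cy    # U+0306 COMBINING BREVE (Cyrillic-style form)
               macroncomb      # U+0304 COMBINING MACRON
               acutecomb       # U+0301 COMBINING ACUTE ACCENT
              ] <anchor 0 700> @TOP_MARKS;

    feature mark {
      # Position the marks above a few sample Cyrillic base glyphs.
      pos base [A-cy Ie-cy I-cy U-cy] <anchor 300 700> mark @TOP_MARKS;
    } mark;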
  • Re: Specific diacritic designs depending on language

    Well, I'm sure it's possible to design a set of generic accents that would be serviceable for all languages insofar as they are sufficiently differentiated, but that doesn't mean they will necessarily conform to the aesthetic preferences of speakers of all those languages. In a language in which accents play a crucial role, the aesthetics of the accents will likely play a much stronger role in the choice of a font than they would in (e.g.) English.

    Of course it is entirely up to the designer whether they want to go to the effort of making localized accents, but they might expand their font's market by doing so.
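
    For what it's worth, language-specific accent designs are usually exposed through a locl lookup. A minimal sketch in AFDKO feature syntax, using Polish's steeper kreska-style acute as the example (the .loclPLK alternates are hypothetical glyph names -- use whatever the font actually contains):

    feature locl {
      script latn;
      language PLK;    # Polish
      # Substitute acute-accented glyphs drawn with the steeper kreska.
      sub [cacute nacute oacute sacute zacute] by
          [cacute.loclPLK nacute.loclPLK oacute.loclPLK sacute.loclPLK zacute.loclPLK];
    } locl;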

    André
  • Re: [OT] Any way to select line breaks?

    It seems like you're tailoring your feature to a specific text -- your code is going to become massively convoluted if you try to accommodate every desired exception rather than simply touching those cases up in the glyph palette. OpenType code really should be reserved for general solutions, even if they are imperfect.

    André
  • Re: Mapping a Unicode range to another

    lookup ss01lookup {
      lookupflag 0;
        sub \u05D0 by \U0710 ;    # ALEF -> SYRIAC LETTER ALAPH
        sub \u05D1 by \U0712 ;    # BET -> SYRIAC LETTER BETH
        ...
    } ss01lookup;

    I think the source of confusion here may lie in the (pseudo) syntax used above, where you appear to be identifying glyphs by the Unicode values of their associated base characters.

    GSUB tables deal exclusively with glyph IDs, not with Unicode values, so even if you write a substitution which *appears* to change the underlying character, it really does no such thing -- it simply replaces one GID with another, leaving the underlying character (and hence its Unicode value) unchanged.
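
    In actual feature syntax the rules have to name glyphs rather than code points -- something along these lines, where the glyph names are hypothetical and depend on how the font names its glyphs:

    feature ss01 {
      # Hypothetical glyph names; only glyph IDs are swapped --
      # the stored characters remain the Hebrew ones.
      sub alef-hb by alaph-syriac;
      sub bet-hb by beth-syriac;
    } ss01;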

    As an example, consider the following (rather pointless) feature:

    feature ss01 { # ROT-13
       sub [A B C D E F G H I J K L M N O P Q R S T U V W X Y Z] by
          [N O P Q R S T U V W X Y Z A B C D E F G H I J K L M];
    } ss01;

    This would implement ROT-13 within a font, and applying the feature would result in text that looks like gibberish.

    So, for example, "THE QUICK BROWN FOX" would be rendered as "GUR DHVPX OEBJA SBK".

    However, if you were to apply this feature and then run your spell checker, it wouldn't find any errors, because the application would still see the text as 'THE QUICK BROWN FOX'. Similarly, in your example above, you can map alef to alaph, but anything outside the font (including the shaping engine) is still going to see it as alef (U+05D0). All of the substitutions performed by your GSUB table take place after the shaping engine has already done its work.

    André
  • Re: Designing workflow

    Let me suggest an experiment:

    Don't bother decomposing composites (that doesn't make much sense for .ttfs), but before generating your .ttf in FontLab Studio, make sure you first select all glyphs in the font and then choose Paths -> Set TT Direction from the Contour menu. See if that helps with your hinting issues.

    Then you wouldn't need to use anything other than FontLab (I'm not clear why TransType is needed, since you can set the family name in FontLab directly).