Why does Unicode encode a separate character for the final sigma in Greek? Doesn't that violate the character-glyph model?
There are actually three reasons for this, all of which conspire to support the same result.

First, there is very extensive legacy practice for handling Greek characters, and most of the major Greek character encodings distinguish a character for the final sigma from a character for the non-final sigma. These include IBM Code Pages 423, 851, and 869; Windows Code Page 1253; the HP Greek8 code page; ISO 8859-7; and the Macintosh Greek code page. Ignoring this legacy and failing to encode separate lowercase final and non-final sigma characters would have created major interoperability problems between Unicode and all preexisting Greek data in those encodings.

Second, the usability of a rendering model involving positional alternate glyphs for characters depends in part on the distribution and regularity of those forms in each particular script. The Arabic script is at one end of this continuum, since it is a cursive script with predictable glyph shape variations for each letter.
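The legacy-encoding point is easy to observe from code. The sketch below (Python, standard library only) shows that ISO 8859-7 assigns distinct byte values to the two lowercase sigma forms, and that CPython's `str.lower()` applies Unicode's Final_Sigma special-casing rule, so round-tripping Greek text requires both characters to exist:

```python
# Legacy encodings distinguish the two lowercase sigma forms:
final_sigma = "\u03c2"   # ς GREEK SMALL LETTER FINAL SIGMA
medial_sigma = "\u03c3"  # σ GREEK SMALL LETTER SIGMA

print(final_sigma.encode("iso-8859-7"))   # b'\xf2'
print(medial_sigma.encode("iso-8859-7"))  # b'\xf3'

# Case conversion must also know about both forms: lowercasing a
# capital sigma yields ς at the end of a word and σ elsewhere
# (the Final_Sigma condition in Unicode's SpecialCasing data).
print("ΟΔΟΣ".lower())   # 'οδος' (trailing letter is U+03C2, not U+03C3)
print("ΣΟΦΟΣ".lower())  # 'σοφος' (initial Σ becomes σ, final Σ becomes ς)
```

A single "sigma" character with rendering-time glyph selection could not survive these legacy encodings losslessly, which is why Unicode keeps the distinction at the character level.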
Related Questions
- How do I encode special characters into their HTML character entity representation?
- What's the difference between the two versions of the WinGreek Greek font?