
Why does Unicode encode a separate character for the final sigma in Greek? Doesn't that violate the character-glyph model?


There are actually three reasons for this, all of which conspire to support the same result.

First, there is very extensive legacy practice for handling Greek characters, and most of the major Greek character encodings distinguish a character for the final sigma from a character for the non-final sigma. These include IBM Code Pages 423, 851, and 869, Windows Code Page 1253, the HP Greek8 code page, ISO 8859-7, and the Macintosh Greek code page. Ignoring this legacy and failing to encode separate lowercase final and non-final sigma characters would simply have caused major interoperability problems between Unicode and all preexisting Greek data in those character encodings.

Second, the usability of a rendering model involving positional alternate glyphs for characters depends in part on the distribution and regularity of those forms in each particular script. The Arabic script is at one end of this continuum, since it is a cursive script with predictable glyph shape variations for each letter depending on its position.
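A small sketch (not from the original answer) can illustrate the interoperability point in Python. It assumes only the standard library codecs for ISO 8859-7 and Windows-1253, and uses the fact that `str.lower()` implements Unicode's contextual Final_Sigma rule (since Python 3.3):

```python
# Legacy Greek code pages assign distinct bytes to final (ς) and
# non-final (σ) lowercase sigma, so a lossless round trip through
# Unicode requires both characters to exist.
FINAL_SIGMA = "\u03c2"   # ς
MEDIAL_SIGMA = "\u03c3"  # σ

for codec in ("iso8859_7", "cp1253"):
    b_final = FINAL_SIGMA.encode(codec)
    b_medial = MEDIAL_SIGMA.encode(codec)
    assert b_final != b_medial                   # two distinct legacy bytes
    assert b_final.decode(codec) == FINAL_SIGMA  # round trip is lossless

# Because both characters are encoded, context-sensitive operations
# such as lowercasing can still choose the correct form.
lowered = "ΟΔΥΣΣΕΥΣ".lower()
assert lowered[-1] == FINAL_SIGMA   # word-final position -> ς
assert lowered[3] == MEDIAL_SIGMA   # word-internal position -> σ
```

Had Unicode unified the two sigmas into one character, the round trip through any of those legacy encodings would have been ambiguous on re-encoding.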

