char is a primitive or intrinsic data type (versus a user-defined type) representing individual characters in the C, C++, Java and C# programming languages.

In C and C++, a char customarily occupies one byte; a signed char can hold values from -128 to 127, and an unsigned char can hold values from 0 to 255 (whether plain char is signed or unsigned is itself left to the implementation). These numbers are mapped, using the American Standard Code for Information Interchange (ASCII), to the uppercase and lowercase alphabet, the digits, punctuation and some control characters. Because a C/C++ char is just one byte, it can only store characters from the Latin alphabet, with almost no support for internationalization. However, because a C/C++ char is, for all intents and purposes, really just a one-byte number, there is an interesting duality between int and char variables, especially in ancient C code.

Disclaimer: This account of the char data type covers only commodity compilers and architectures at the time of writing, namely the GNU and Microsoft compilers on the x86 and PowerPC architectures. More exotic architectures such as the IBM mainframe family or the Cray supercomputers are not covered, owing to the limits of the author's knowledge. Additionally, this writeup makes no claim to explain the char type as described in the technical standards. In fact, while the standards define sizeof (char) to be exactly 1, a "byte" in this sense need not be 8 bits: the standards only require that a char have at least 8 bits (CHAR_BIT >= 8), and that sizeof (char) not exceed sizeof (short), which must not exceed sizeof (int), which in turn must not exceed sizeof (long).

In Java and C#, a char variable is a numeric primitive type that occupies two bytes. Using the 16 bits that a two-byte char occupies, Unicode values from non-Latin alphabets, such as the Cyrillic, East Asian and various Indic scripts, may be encoded into a char variable (characters outside the Basic Multilingual Plane require a pair of chars, called a surrogate pair). However, because a char now has a specific meaning as a character with a particular encoding in a certain alphabet, and is not just a number, one notices the char/int duality a lot less in Java and C#. The closest match to the usage of char in old C code is the byte data type, though the two languages disagree on its sign: Java's byte is signed, holding -128 to 127, while C#'s byte is unsigned, holding 0 to 255. C# also has a signed byte data type called sbyte.

Should char be pronounced with a hard initial k sound, in keeping with its origin as an abbreviation for character, or should it be subject to customary English pronunciation and be spoken like the char part of charred? I, for one, prefer the latter because that's how I would pronounce it as a standalone English word. However, several other developers I know prefer to hark back to its origins by pronouncing it with a hard initial k sound.