The Encode module provides the interface between Perl's strings and the rest of the system. Perl strings are sequences of characters. The repertoire of characters that Perl can represent is at least that defined by the Unicode Consortium. On most platforms the ordinal value of a character, as returned by ord($ch), is the Unicode code point for that character; the exceptions are those platforms where the legacy encoding is some variant of EBCDIC rather than a superset of ASCII - see perlebcdic.
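As a quick illustration of the code-point correspondence (a minimal sketch; the sample character is arbitrary):

```perl
use strict;
use warnings;

# On ASCII-superset platforms, ord() yields the Unicode code point.
my $ch = "\x{263A}";      # WHITE SMILING FACE, U+263A
print ord($ch), "\n";     # prints 9786, i.e. 0x263A
```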
Traditionally computer data has been moved around in 8-bit chunks, often called "bytes". These chunks are also known as "octets" in networking standards. Perl is widely used to manipulate data of many types - not only strings of characters representing human or computer languages, but also "binary" data: the machine's representation of numbers, pixels in an image, or just about anything.
When Perl is processing "binary data" the programmer wants Perl to process "sequences of bytes". This is not a problem for Perl - as a byte has 256 possible values, it easily fits in Perl's much larger "logical character".
An encoding has a "repertoire" of characters that it can represent, and for each representable character there is at least one sequence of octets that represents it. In a single-octet encoding each character is a single octet, so the repertoire holds at most 256 characters. In a fixed two-octet encoding each character is two octets, so the repertoire holds at most 65,536 characters; Unicode's UCS-2 is an example, and such encodings are also used for East Asian languages. These are not really very "encoded" encodings.
The Unicode code points are just represented as 4-octet integers. Nonetheless, because different architectures use different representations of integers (so-called "endianness"), there are at least two distinct encodings. In a variable-length encoding the number of octets needed to represent a character varies. UTF-8 is a particularly complex but regular case of a multi-byte encoding. Several East Asian countries use a multi-byte encoding where one octet is used to cover western roman characters and Asian characters get two octets.
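The variable-length nature of UTF-8 can be seen by encoding characters from different ranges (a sketch using Encode's encode; the sample characters are arbitrary):

```perl
use strict;
use warnings;
use Encode qw(encode);

# The number of octets per character varies in UTF-8.
my $ascii = encode("UTF-8", "A");         # U+0041 -> 1 octet
my $latin = encode("UTF-8", "\x{E9}");    # U+00E9 -> 2 octets
my $euro  = encode("UTF-8", "\x{20AC}");  # U+20AC -> 3 octets
print length($ascii), " ", length($latin), " ", length($euro), "\n"; # 1 2 3
```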
UTF-16 is strictly a multi-byte encoding, taking either 2 or 4 octets to represent a Unicode code point. Escape encodings, by contrast, embed "escape sequences" into the octet sequence which describe how the following octets are to be interpreted.
Following the escape sequence, octets are encoded by an "embedded" encoding (which will be one of the above types) until another escape sequence switches to a different "embedded" encoding.
These schemes are very flexible and can handle mixed languages, but they are very complex to process (and have state). No escape encodings are implemented for Perl yet. Encoding names are strings with characters taken from a restricted repertoire. Encoding names are case insensitive, and white space in names is ignored. In addition, an encoding may have aliases.
Each encoding has one "canonical" name, chosen from the names of the encoding according to a fixed preference order. Because of all the alias issues, and because in the general case encodings have state, Encode uses an encoding object internally once an operation is in progress.
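Newer versions of Encode expose alias resolution directly; assuming Encode::resolve_alias is available, a sketch:

```perl
use strict;
use warnings;
use Encode;

# resolve_alias maps an alias to the canonical encoding name.
print Encode::resolve_alias("latin1"), "\n";    # iso-8859-1
print Encode::resolve_alias("Latin1"), "\n";    # case insensitive: iso-8859-1
```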
Convert the data in place between two encodings, either using from_to or through PerlIO (see "Encoding and IO"). Note that because the conversion happens in place, the data to be converted cannot be a string constant; it must be a scalar variable.
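A sketch of the in-place conversion; note that the data is held in a scalar variable, not a constant:

```perl
use strict;
use warnings;
use Encode qw(from_to);

my $octets = "caf\xE9";                   # "café" as ISO-8859-1 octets
from_to($octets, "iso-8859-1", "UTF-8");  # converted in place
print unpack("H*", $octets), "\n";        # 636166c3a9 (é is now C3 A9)
```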
If the data is supposed to be UTF-8, an optional lexical warning (category utf8) is given. It would be desirable to have a way to indicate that a transform should use the encoding's "replacement character", but no such mechanism is defined yet. Fixup routines are likewise not yet implemented, as there are design issues with what their arguments should be and how they return their results. In one scheme, the routine is passed the remaining fragment of the string being processed.
This scheme is close to how the underlying C code for Encode works, but gives the fixup routine very little context. In another scheme, the routine is passed the original string, an index into it of the problem area, and the output string so far.
It appends what it will to the output string and returns a new index into the original string. This scheme gives maximal control to the fixup routine but is more complicated to code, and may need the internals of Encode to be tweaked to keep the original string intact. The Unicode consortium defines the UTF-8 standard as a way of encoding the entire Unicode repertoire as sequences of octets. This encoding is expected to become very widespread. Perl can use this form internally to represent strings, so conversions to and from this form are particularly efficient, as octets in memory do not have to change, just the meta-data that tells Perl how to treat them.
The characters that comprise a string are encoded in Perl's superset of UTF-8 and the resulting octets returned as a sequence of bytes. All possible characters have a UTF-8 representation, so this function cannot fail. Not all sequences of octets form valid UTF-8 encodings, however, so it is possible for the decoding call to fail. UCS-2 can only represent code points up to 0xFFFF. Surrogates are code points set aside to encode the 0x10000..0x10FFFF range of Unicode characters as pairs: the high surrogates are the range 0xD800..0xDBFF and the low surrogates are the range 0xDC00..0xDFFF.
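The surrogate encoding can be sketched arithmetically; the sample code point (U+1F600) is arbitrary:

```perl
use strict;
use warnings;

# Split a supplementary code point into its UTF-16 surrogate pair.
my $uni = 0x1F600;                             # any code point above 0xFFFF
my $hi  = 0xD800 + (($uni - 0x10000) >> 10);   # high (leading) surrogate
my $lo  = 0xDC00 + (($uni - 0x10000) & 0x3FF); # low (trailing) surrogate
printf "%04X %04X\n", $hi, $lo;                # D83D DE00
```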
Encode implements big-endian UCS-2, aliased to "iso-10646-1", as that happens to be the name used by that representation when used with X11 fonts. Perl's logical characters can be considered as being in this form without encoding.
An encoding to transfer strings in this form is nevertheless provided. It is very common to want to do encoding transformations when reading or writing files, network connections, pipes etc. If Perl is configured to use the new 'perlio' IO system, then Encode provides a "layer" (see perliol) which can transform data as it is read or written. Either of the above forms of "layer" specification can be made the default for a lexical scope with the use open pragma. Without any such configuration, or if Perl itself is built using the system's own IO, then write operations assume that the file handle accepts only bytes and will die if a character larger than 255 is written to the handle.
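A sketch of the layer in use, assuming a perlio-enabled build; the scratch file name is hypothetical:

```perl
use strict;
use warnings;

my $file = "demo-latin1.txt";                  # hypothetical scratch file
open my $out, '>:encoding(iso-8859-1)', $file or die $!;
print $out "caf\x{E9}\n";                      # characters; layer emits Latin-1 octets
close $out;

open my $in, '<:encoding(iso-8859-1)', $file or die $!;
my $line = <$in>;                              # octets decoded back to characters
close $in;
unlink $file;

printf "%02X\n", ord(substr($line, 3, 1));     # E9
```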
When reading, each octet from the handle becomes a byte-in-a-character. Note that this default is the same behaviour as bytes-only languages (including Perl before v5.6) would have. In other cases it is the program's responsibility to transform characters into bytes using the API above before doing writes, and to transform the bytes read from a handle into characters before doing "character operations" such as lc or regular expression matches.
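A sketch of that transform-then-operate pattern using the API above:

```perl
use strict;
use warnings;
use Encode qw(decode encode);

my $bytes = "\xC3\x89";                # UTF-8 octets for É (U+00C9)
my $chars = decode("UTF-8", $bytes);   # bytes -> characters first
my $lower = lc $chars;                 # character operations now behave
print encode("UTF-8", $lower);         # back to bytes before writing
```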
You can also use PerlIO to convert larger amounts of data you don't want to bring into memory. See PerlIO for more information, and see the encoding pragma for how to change the default encoding of the data in your script. The following API uses parts of Perl's internals in the current implementation. As such these functions are efficient, but may change.
Returns true if successful, false otherwise. The main reason for this routine is to allow Perl's testsuite to check that operations have left strings in a consistent state. Do not use it frivolously. As mentioned above, encodings are (in the current implementation at least) defined by objects; the mapping from encoding name to object is via a hash. The values of the hash can currently be either strings or objects. The string form may go away in the future.
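A sketch of inspecting the internal UTF-8 flag, assuming Encode's exportable is_utf8:

```perl
use strict;
use warnings;
use Encode qw(decode is_utf8);

my $octets = "\xC3\xA9";                      # raw bytes
my $chars  = decode("UTF-8", $octets);        # decoded characters
print is_utf8($octets) ? "on" : "off", "\n";  # off
print is_utf8($chars)  ? "on" : "off", "\n";  # on
```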
The string form occurs when encodings() has scanned @INC for loadable encodings but has not actually loaded the encoding in question. This is because the current "loading" process is all Perl and a bit slow.
Once an encoding is loaded, the value of the hash is an object which implements the encoding. The object should provide an interface of named methods, including a placeholder for encodings with state: that method should return an object which implements this interface; all current implementations return the original object.
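A minimal, hypothetical object satisfying such an interface (the class name and its trivial identity behaviour are made up for illustration; method names follow the ones discussed in the surrounding text):

```perl
use strict;
use warnings;

# Hypothetical stateless "identity" encoding object.
package My::Identity;
sub new          { bless {}, shift }
sub name         { "my-identity" }   # canonical name of this encoding
sub new_sequence { $_[0] }           # stateless: hand back the same object
sub encode       { my ($self, $str, $check) = @_; return $str }
sub decode       { my ($self, $oct, $check) = @_; return $oct }

package main;
my $enc = My::Identity->new;
print $enc->name, "\n";              # my-identity
print $enc->encode("abc", 0), "\n";  # abc
```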
If check is false then encode should make a "best effort" to convert the string - for example by using a replacement character. It should be noted that this check behaviour is different from the outer public API. The logic is that the "unchecked" case is useful when encoding is part of a stream which may itself be reporting errors. In such cases it is desirable to get everything through somehow without causing additional errors which obscure the original one.
Also, the encoding is best placed to know what the correct replacement character is, so if that is the desired behaviour then letting the low-level code do it is the most efficient.
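A sketch of the best-effort behaviour with check false; in current Encode the default replacement on output is a question mark:

```perl
use strict;
use warnings;
use Encode qw(encode);

# U+2603 (SNOWMAN) is outside the Latin-1 repertoire; with check
# false, encode substitutes a replacement character instead of dying.
my $octets = encode("iso-8859-1", "snow: \x{2603}");
print $octets, "\n";   # snow: ?
```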
In contrast, if check is true, the scheme above allows the encoding to do as much as it can and tell the layer above how much that was. What is lacking at present is a mechanism to report what went wrong. The most likely mechanism will be an additional method call to the object, or perhaps (to avoid forcing per-stream state on otherwise stateless encodings) an additional parameter.
It is also highly desirable that encoding classes inherit from Encode::Encoding as a base class. This allows that class to define additional behaviour for all encoding objects. Compiled encodings inherit their name method from Encode::XS, which provides the interface described above. Encode::XS calls a generic octet-sequence to octet-sequence "engine" that is driven by tables defined in encengine.c. The same engine is used for both encode and decode.
Encode::XS's encode forces Perl's characters to their UTF-8 form and then treats them as just another multibyte encoding. Encode::XS's decode transforms the sequence and then turns on the UTF-8 flag, as that is the form that the tables are defined to produce. For details of the engine, see the comments in encengine.c.