
Commit 3376144

gh-116666: Add glossary entry for token

1 parent 4f62189 commit 3376144

File tree (4 files changed, +23 −7 lines):

- Doc/glossary.rst
- Doc/reference/lexical_analysis.rst
- Doc/tutorial/errors.rst
- Doc/tutorial/interactive.rst

Doc/glossary.rst
Lines changed: 15 additions & 0 deletions

@@ -800,6 +800,10 @@ Glossary
       thread removes *key* from *mapping* after the test, but before the lookup.
       This issue can be solved with locks or by using the EAFP approach.
 
+   lexical analyzer
+
+      Formal name for the *tokenizer*; see :term:`token`.
+
    list
       A built-in Python :term:`sequence`. Despite its name it is more akin
       to an array in other languages than to a linked list since access to

@@ -1291,6 +1295,17 @@ Glossary
       See also :term:`binary file` for a file object able to read and write
       :term:`bytes-like objects <bytes-like object>`.
 
+   token
+
+      A small unit of source code, generated by the
+      :ref:`lexical analyzer <lexical>` (also called *tokenizer*).
+      Names, numbers, strings, operators,
+      newlines and similar are represented by tokens.
+
+      The :mod:`tokenize` module exposes Python's lexical analyzer.
+      The :mod:`token` module contains information on the various types
+      of tokens.
+
    triple-quoted string
       A string which is bound by three instances of either a quotation mark
       (") or an apostrophe ('). While they don't provide any functionality

Doc/reference/lexical_analysis.rst
Lines changed: 3 additions & 2 deletions

@@ -8,8 +8,9 @@ Lexical analysis
 .. index:: lexical analysis, parser, token
 
 A Python program is read by a *parser*. Input to the parser is a stream of
-*tokens*, generated by the *lexical analyzer*. This chapter describes how the
-lexical analyzer breaks a file into tokens.
+:term:`tokens <token>`, generated by the *lexical analyzer* (also known as
+the *tokenizer*).
+This chapter describes how the lexical analyzer breaks a file into tokens.
 
 Python reads program text as Unicode code points; the encoding of a source file
 can be given by an encoding declaration and defaults to UTF-8, see :pep:`3120`
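
To make "stream of tokens" concrete: the same one-line program can be viewed both as the tokenizer's output and as the parse result built from it. A rough sketch, assuming nothing beyond the stdlib (the source line is invented):

    import ast
    import io
    import token
    import tokenize

    src = "print(1)\n"
    # The lexical analyzer's view: a stream of tokens.
    stream = [token.tok_name[t.type]
              for t in tokenize.generate_tokens(io.StringIO(src).readline)]
    print(stream)  # ['NAME', 'OP', 'NUMBER', 'OP', 'NEWLINE', 'ENDMARKER']
    # The parser's view: a syntax tree built from that token stream.
    print(ast.dump(ast.parse(src)))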

Doc/tutorial/errors.rst
Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ complaint you get while you are still learning Python::
    SyntaxError: invalid syntax
 
 The parser repeats the offending line and displays little arrows pointing
-at the token in the line where the error was detected. The error may be
+at the :term:`token` in the line where the error was detected. The error may be
 caused by the absence of a token *before* the indicated token. In the
 example, the error is detected at the function :func:`print`, since a colon
 (``':'``) is missing before it. File name and line number are printed so you
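
The hunk's surrounding example is a while statement with its colon missing. One way to reproduce what the parser reports, a sketch using the built-in compile() (the source string mirrors, but may not exactly match, the tutorial's example):

    # Trigger the missing-colon error and inspect where the parser
    # flagged the offending token.
    try:
        compile("while True print('Hello world')\n", "<example>", "exec")
    except SyntaxError as err:
        # err.offset is the 1-based column of the token at which the error
        # was detected; here the parser stops at 'print', since the ':'
        # is missing before it.
        print(err.lineno, err.offset, err.text.strip())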

Doc/tutorial/interactive.rst
Lines changed: 4 additions & 4 deletions

@@ -37,10 +37,10 @@ Alternatives to the Interactive Interpreter
 
 This facility is an enormous step forward compared to earlier versions of the
 interpreter; however, some wishes are left: It would be nice if the proper
-indentation were suggested on continuation lines (the parser knows if an indent
-token is required next). The completion mechanism might use the interpreter's
-symbol table. A command to check (or even suggest) matching parentheses,
-quotes, etc., would also be useful.
+indentation were suggested on continuation lines (the parser knows if an
+:data:`~token.INDENT` token is required next). The completion mechanism might
+use the interpreter's symbol table. A command to check (or even suggest)
+matching parentheses, quotes, etc., would also be useful.
 
 One alternative enhanced interactive interpreter that has been around for quite
 some time is IPython_, which features tab completion, object exploration and
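
The new cross-reference points at :data:`token.INDENT`. A small illustration of the fact the sentence relies on, namely that the tokenizer emits an explicit INDENT token after a block-opening line (the snippet is invented):

    import io
    import token
    import tokenize

    src = "if x:\n    y = 1\n"
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        # INDENT appears right after the block-opening line; a shell that
        # watched the token stream could use it to suggest indentation.
        if tok.type in (token.INDENT, token.DEDENT):
            print(token.tok_name[tok.type], repr(tok.string))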
