Commit 227d1b2

[llvm] Proofread YamlIO.rst (#150475)
This patch only adds double backticks around code-related terms to facilitate the review process.
1 parent 8b8b0f1 commit 227d1b2


llvm/docs/YamlIO.rst

Lines changed: 68 additions & 68 deletions
@@ -92,7 +92,7 @@ corresponding denormalization step.
 YAML I/O uses a non-invasive, traits based design. YAML I/O defines some
 abstract base templates. You specialize those templates on your data types.
 For instance, if you have an enumerated type FooBar you could specialize
-ScalarEnumerationTraits on that type and define the enumeration() method:
+ScalarEnumerationTraits on that type and define the ``enumeration()`` method:

 .. code-block:: c++

@@ -113,7 +113,7 @@ values and the YAML string representation is only in one place.
 This assures that the code for writing and parsing of YAML stays in sync.

 To specify a YAML mappings, you define a specialization on
-llvm::yaml::MappingTraits.
+``llvm::yaml::MappingTraits``.
 If your native data structure happens to be a struct that is already normalized,
 then the specialization is simple. For example:

@@ -131,9 +131,9 @@ then the specialization is simple. For example:
 };


-A YAML sequence is automatically inferred if you data type has begin()/end()
-iterators and a push_back() method. Therefore any of the STL containers
-(such as std::vector<>) will automatically translate to YAML sequences.
+A YAML sequence is automatically inferred if you data type has ``begin()``/``end()``
+iterators and a ``push_back()`` method. Therefore any of the STL containers
+(such as ``std::vector<>``) will automatically translate to YAML sequences.

 Once you have defined specializations for your data types, you can
 programmatically use YAML I/O to write a YAML document:
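The context of this hunk is container-to-sequence translation followed by writing a document. A hedged sketch of that flow; the ``Stuff`` type, its keys, and ``writeDoc`` are invented for illustration and are not part of this patch:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"
  #include <string>
  #include <vector>

  struct Stuff {
    std::string Name;
    int Count = 0;
  };

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Stuff> {
    static void mapping(IO &io, Stuff &s) {
      io.mapRequired("name", s.Name);
      io.mapRequired("count", s.Count);
    }
  };
  } // namespace yaml
  } // namespace llvm

  // Registers Stuff as a sequence element, so std::vector<Stuff> round-trips
  // as a YAML sequence.
  LLVM_YAML_IS_SEQUENCE_VECTOR(Stuff)

  void writeDoc(std::vector<Stuff> &items) {
    llvm::yaml::Output yout(llvm::outs());
    yout << items;   // emits a YAML sequence of mappings
  }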
@@ -195,8 +195,8 @@ Error Handling
 ==============

 When parsing a YAML document, if the input does not match your schema (as
-expressed in your XxxTraits<> specializations). YAML I/O
-will print out an error message and your Input object's error() method will
+expressed in your ``XxxTraits<>`` specializations). YAML I/O
+will print out an error message and your Input object's ``error()`` method will
 return true. For instance the following document:

 .. code-block:: yaml
@@ -265,8 +265,8 @@ operators to and from the base type. For example:
 LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags)

 This generates two classes MyFooFlags and MyBarFlags which you can use in your
-native data structures instead of uint32_t. They are implicitly
-converted to and from uint32_t. The point of creating these unique types
+native data structures instead of ``uint32_t``. They are implicitly
+converted to and from ``uint32_t``. The point of creating these unique types
 is that you can now specify traits on them to get different YAML conversions.

 Hex types
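A brief, hedged sketch of how the strong typedefs from this hunk might be used; the ``Options`` struct is hypothetical:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include <cstdint>

  // Two distinct wrapper types around uint32_t; each one can later be given
  // its own ScalarTraits or ScalarBitSetTraits without affecting the other.
  LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFooFlags)
  LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags)

  struct Options {
    MyFooFlags Foo;   // implicitly converts to/from uint32_t
    MyBarFlags Bar;
  };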
@@ -280,15 +280,15 @@ format used by the built-in integer types:
 * Hex16
 * Hex8

-You can use llvm::yaml::Hex32 instead of uint32_t and the only different will
+You can use ``llvm::yaml::Hex32`` instead of ``uint32_t`` and the only different will
 be that when YAML I/O writes out that type it will be formatted in hexadecimal.


 ScalarEnumerationTraits
 -----------------------
 YAML I/O supports translating between in-memory enumerations and a set of string
-values in YAML documents. This is done by specializing ScalarEnumerationTraits<>
-on your enumeration type and define an enumeration() method.
+values in YAML documents. This is done by specializing ``ScalarEnumerationTraits<>``
+on your enumeration type and define an ``enumeration()`` method.
 For instance, suppose you had an enumeration of CPUs and a struct with it as
 a field:

@@ -333,9 +333,9 @@ as a field type:
 };

 When reading YAML, if the string found does not match any of the strings
-specified by enumCase() methods, an error is automatically generated.
+specified by ``enumCase()`` methods, an error is automatically generated.
 When writing YAML, if the value being written does not match any of the values
-specified by the enumCase() methods, a runtime assertion is triggered.
+specified by the ``enumCase()`` methods, a runtime assertion is triggered.


 BitValue
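For readers who want the shape of such a specialization in one place, a minimal hedged sketch; the ``CPUKind`` enum and its YAML spellings are invented, not taken from the patch:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"

  enum class CPUKind { x86_64, PowerPC, Arm };

  namespace llvm {
  namespace yaml {
  template <> struct ScalarEnumerationTraits<CPUKind> {
    static void enumeration(IO &io, CPUKind &value) {
      // Each enumCase() pairs one enumerator with its YAML spelling.
      io.enumCase(value, "x86_64", CPUKind::x86_64);
      io.enumCase(value, "PowerPC", CPUKind::PowerPC);
      io.enumCase(value, "Arm", CPUKind::Arm);
    }
  };
  } // namespace yaml
  } // namespace llvm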
@@ -442,10 +442,10 @@ Sometimes for readability a scalar needs to be formatted in a custom way. For
 instance your internal data structure may use an integer for time (seconds since
 some epoch), but in YAML it would be much nicer to express that integer in
 some time format (e.g. 4-May-2012 10:30pm). YAML I/O has a way to support
-custom formatting and parsing of scalar types by specializing ScalarTraits<> on
+custom formatting and parsing of scalar types by specializing ``ScalarTraits<>`` on
 your data type. When writing, YAML I/O will provide the native type and
-your specialization must create a temporary llvm::StringRef. When reading,
-YAML I/O will provide an llvm::StringRef of scalar and your specialization
+your specialization must create a temporary ``llvm::StringRef``. When reading,
+YAML I/O will provide an ``llvm::StringRef`` of scalar and your specialization
 must convert that to your native data type. An outline of a custom scalar type
 looks like:

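A hedged outline of such a ``ScalarTraits`` specialization for an invented ``MyTime`` type; note that recent LLVM releases declare ``mustQuote()`` as returning ``QuotingType``, while older releases used ``bool``:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"
  #include <cstdint>

  struct MyTime { uint64_t Seconds = 0; };   // seconds since some epoch

  namespace llvm {
  namespace yaml {
  template <> struct ScalarTraits<MyTime> {
    static void output(const MyTime &value, void *ctxt, raw_ostream &out) {
      out << value.Seconds;            // format however is most readable here
    }
    static StringRef input(StringRef scalar, void *ctxt, MyTime &value) {
      if (scalar.getAsInteger(10, value.Seconds))
        return "expected an unsigned integer";   // non-empty => parse error
      return StringRef();                        // empty => success
    }
    static QuotingType mustQuote(StringRef) { return QuotingType::None; }
  };
  } // namespace yaml
  } // namespace llvm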
@@ -482,15 +482,15 @@ literal block notation, just like the example shown below:
 Second line

 The YAML I/O library provides support for translating between YAML block scalars
-and specific C++ types by allowing you to specialize BlockScalarTraits<> on
+and specific C++ types by allowing you to specialize ``BlockScalarTraits<>`` on
 your data type. The library doesn't provide any built-in support for block
-scalar I/O for types like std::string and llvm::StringRef as they are already
+scalar I/O for types like ``std::string`` and ``llvm::StringRef`` as they are already
 supported by YAML I/O and use the ordinary scalar notation by default.

 BlockScalarTraits specializations are very similar to the
 ScalarTraits specialization - YAML I/O will provide the native type and your
-specialization must create a temporary llvm::StringRef when writing, and
-it will also provide an llvm::StringRef that has the value of that block scalar
+specialization must create a temporary ``llvm::StringRef`` when writing, and
+it will also provide an ``llvm::StringRef`` that has the value of that block scalar
 and your specialization must convert that to your native data type when reading.
 An example of a custom type with an appropriate specialization of
 BlockScalarTraits is shown below:
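A minimal hedged sketch of a ``BlockScalarTraits`` specialization, using an invented ``MultilineText`` wrapper type:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"
  #include <string>

  struct MultilineText { std::string Value; };

  namespace llvm {
  namespace yaml {
  template <> struct BlockScalarTraits<MultilineText> {
    static void output(const MultilineText &value, void *ctxt,
                       raw_ostream &out) {
      out << value.Value;              // emitted using block scalar notation
    }
    static StringRef input(StringRef scalar, void *ctxt,
                           MultilineText &value) {
      value.Value = scalar.str();
      return StringRef();              // empty string means "no error"
    }
  };
  } // namespace yaml
  } // namespace llvm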
@@ -524,7 +524,7 @@ Mappings
 ========

 To be translated to or from a YAML mapping for your type T you must specialize
-llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)"
+``llvm::yaml::MappingTraits`` on T and implement the "void mapping(IO &io, T&)"
 method. If your native data structures use pointers to a class everywhere,
 you can specialize on the class pointer. Examples:

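A compact, hedged restatement of the pattern this hunk describes, for an invented ``Person`` struct with made-up key names:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include <string>

  struct Person {
    std::string Name;
    int HatSize = 0;
  };

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Person> {
    static void mapping(IO &io, Person &info) {
      io.mapRequired("name", info.Name);        // key must be present on input
      io.mapOptional("hat-size", info.HatSize); // key may be omitted
    }
  };
  } // namespace yaml
  } // namespace llvm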
@@ -585,7 +585,7 @@ No Normalization

 The ``mapping()`` method is responsible, if needed, for normalizing and
 denormalizing. In a simple case where the native data structure requires no
-normalization, the mapping method just uses mapOptional() or mapRequired() to
+normalization, the mapping method just uses ``mapOptional()`` or ``mapRequired()`` to
 bind the struct's fields to YAML key names. For example:

 .. code-block:: c++
@@ -605,11 +605,11 @@ bind the struct's fields to YAML key names. For example:
 Normalization
 ----------------

-When [de]normalization is required, the mapping() method needs a way to access
+When [de]normalization is required, the ``mapping()`` method needs a way to access
 normalized values as fields. To help with this, there is
-a template MappingNormalization<> which you can then use to automatically
+a template ``MappingNormalization<>`` which you can then use to automatically
 do the normalization and denormalization. The template is used to create
-a local variable in your mapping() method which contains the normalized keys.
+a local variable in your ``mapping()`` method which contains the normalized keys.

 Suppose you have native data type
 Polar which specifies a position in polar coordinates (distance, angle):
@@ -629,7 +629,7 @@ is, you want the yaml to look like:
 x: 10.3
 y: -4.7

-You can support this by defining a MappingTraits that normalizes the polar
+You can support this by defining a ``MappingTraits`` that normalizes the polar
 coordinates to x,y coordinates when writing YAML and denormalizes x,y
 coordinates into polar when reading YAML.

@@ -667,62 +667,62 @@ coordinates into polar when reading YAML.
 };

 When writing YAML, the local variable "keys" will be a stack allocated
-instance of NormalizedPolar, constructed from the supplied polar object which
-initializes it x and y fields. The mapRequired() methods then write out the x
+instance of ``NormalizedPolar``, constructed from the supplied polar object which
+initializes it x and y fields. The ``mapRequired()`` methods then write out the x
 and y values as key/value pairs.

 When reading YAML, the local variable "keys" will be a stack allocated instance
-of NormalizedPolar, constructed by the empty constructor. The mapRequired
+of ``NormalizedPolar``, constructed by the empty constructor. The ``mapRequired()``
 methods will find the matching key in the YAML document and fill in the x and y
-fields of the NormalizedPolar object keys. At the end of the mapping() method
-when the local keys variable goes out of scope, the denormalize() method will
+fields of the ``NormalizedPolar`` object keys. At the end of the ``mapping()`` method
+when the local keys variable goes out of scope, the ``denormalize()`` method will
 automatically be called to convert the read values back to polar coordinates,
-and then assigned back to the second parameter to mapping().
+and then assigned back to the second parameter to ``mapping()``.

 In some cases, the normalized class may be a subclass of the native type and
-could be returned by the denormalize() method, except that the temporary
+could be returned by the ``denormalize()`` method, except that the temporary
 normalized instance is stack allocated. In these cases, the utility template
-MappingNormalizationHeap<> can be used instead. It just like
-MappingNormalization<> except that it heap allocates the normalized object
-when reading YAML. It never destroys the normalized object. The denormalize()
+``MappingNormalizationHeap<>`` can be used instead. It just like
+``MappingNormalization<>`` except that it heap allocates the normalized object
+when reading YAML. It never destroys the normalized object. The ``denormalize()``
 method can this return "this".


 Default values
 --------------
-Within a mapping() method, calls to io.mapRequired() mean that that key is
+Within a ``mapping()`` method, calls to ``io.mapRequired()`` mean that that key is
 required to exist when parsing YAML documents, otherwise YAML I/O will issue an
 error.

-On the other hand, keys registered with io.mapOptional() are allowed to not
+On the other hand, keys registered with ``io.mapOptional()`` are allowed to not
 exist in the YAML document being read. So what value is put in the field
 for those optional keys?
 There are two steps to how those optional fields are filled in. First, the
-second parameter to the mapping() method is a reference to a native class. That
+second parameter to the ``mapping()`` method is a reference to a native class. That
 native class must have a default constructor. Whatever value the default
 constructor initially sets for an optional field will be that field's value.
-Second, the mapOptional() method has an optional third parameter. If provided
-it is the value that mapOptional() should set that field to if the YAML document
+Second, the ``mapOptional()`` method has an optional third parameter. If provided
+it is the value that ``mapOptional()`` should set that field to if the YAML document
 does not have that key.

 There is one important difference between those two ways (default constructor
-and third parameter to mapOptional). When YAML I/O generates a YAML document,
-if the mapOptional() third parameter is used, if the actual value being written
+and third parameter to ``mapOptional()``). When YAML I/O generates a YAML document,
+if the ``mapOptional()`` third parameter is used, if the actual value being written
 is the same as (using ==) the default value, then that key/value is not written.


 Order of Keys
 --------------

 When writing out a YAML document, the keys are written in the order that the
-calls to mapRequired()/mapOptional() are made in the mapping() method. This
+calls to ``mapRequired()``/``mapOptional()`` are made in the ``mapping()`` method. This
 gives you a chance to write the fields in an order that a human reader of
 the YAML document would find natural. This may be different that the order
 of the fields in the native class.

 When reading in a YAML document, the keys in the document can be in any order,
-but they are processed in the order that the calls to mapRequired()/mapOptional()
-are made in the mapping() method. That enables some interesting
+but they are processed in the order that the calls to ``mapRequired()``/``mapOptional()``
+are made in the ``mapping()`` method. That enables some interesting
 functionality. For instance, if the first field bound is the cpu and the second
 field bound is flags, and the flags are cpu specific, you can programmatically
 switch how the flags are converted to and from YAML based on the cpu.
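The "Default values" part of this hunk is easiest to see with the optional third argument spelled out. A hedged sketch, reusing the invented ``Person`` type from the earlier mapping example:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include <string>

  struct Person {
    std::string Name;
    int HatSize = 6;   // value produced by the default constructor
  };

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Person> {
    static void mapping(IO &io, Person &info) {
      io.mapRequired("name", info.Name);
      // Third argument: used when "hat-size" is absent on input; on output
      // the key is omitted whenever info.HatSize == 6.
      io.mapOptional("hat-size", info.HatSize, 6);
    }
  };
  } // namespace yaml
  } // namespace llvm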
@@ -761,7 +761,7 @@ model. Recently, we added support to YAML I/O for checking/setting the optional
 tag on a map. Using this functionality it is even possible to support different
 mappings, as long as they are convertible.

-To check a tag, inside your mapping() method you can use io.mapTag() to specify
+To check a tag, inside your ``mapping()`` method you can use ``io.mapTag()`` to specify
 what the tag should be. This will also add that tag when writing yaml.

 Validation
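A hedged sketch of tag checking with ``io.mapTag()``; the ``Circle`` type and the ``!Circle`` tag are made up for illustration:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"

  struct Circle { double Radius = 0; };

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Circle> {
    static void mapping(IO &io, Circle &c) {
      // Checks the node's tag when reading and emits "!Circle" when writing.
      io.mapTag("!Circle", /*Default=*/true);
      io.mapRequired("radius", c.Radius);
    }
  };
  } // namespace yaml
  } // namespace llvm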
@@ -834,7 +834,7 @@ Sequence
 ========

 To be translated to or from a YAML sequence for your type T you must specialize
-llvm::yaml::SequenceTraits on T and implement two methods:
+``llvm::yaml::SequenceTraits`` on T and implement two methods:
 ``size_t size(IO &io, T&)`` and
 ``T::value_type& element(IO &io, T&, size_t indx)``. For example:

@@ -846,10 +846,10 @@ llvm::yaml::SequenceTraits on T and implement two methods:
 static MySeqEl &element(IO &io, MySeq &list, size_t index) { ... }
 };

-The size() method returns how many elements are currently in your sequence.
-The element() method returns a reference to the i'th element in the sequence.
-When parsing YAML, the element() method may be called with an index one bigger
-than the current size. Your element() method should allocate space for one
+The ``size()`` method returns how many elements are currently in your sequence.
+The ``element()`` method returns a reference to the i'th element in the sequence.
+When parsing YAML, the ``element()`` method may be called with an index one bigger
+than the current size. Your ``element()`` method should allocate space for one
 more element (using default constructor if element is a C++ object) and returns
 a reference to that new allocated space.

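A hedged sketch of the growth behaviour described here, for an invented ``MySeq`` container of ``MySeqEl`` values (traits for the element type itself are omitted for brevity):

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include <vector>

  struct MySeqEl { int X = 0; };
  using MySeq = std::vector<MySeqEl>;

  namespace llvm {
  namespace yaml {
  template <> struct SequenceTraits<MySeq> {
    static size_t size(IO &io, MySeq &list) { return list.size(); }
    static MySeqEl &element(IO &io, MySeq &list, size_t index) {
      // On input the index can be one past the current size, so grow the
      // container on demand and return a reference to the fresh slot.
      if (index >= list.size())
        list.resize(index + 1);
      return list[index];
    }
  };
  } // namespace yaml
  } // namespace llvm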
@@ -881,10 +881,10 @@ configuration.

 Utility Macros
 --------------
-Since a common source of sequences is std::vector<>, YAML I/O provides macros:
-LLVM_YAML_IS_SEQUENCE_VECTOR() and LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR() which
-can be used to easily specify SequenceTraits<> on a std::vector type. YAML
-I/O does not partial specialize SequenceTraits on std::vector<> because that
+Since a common source of sequences is ``std::vector<>``, YAML I/O provides macros:
+``LLVM_YAML_IS_SEQUENCE_VECTOR()`` and ``LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR()`` which
+can be used to easily specify ``SequenceTraits<>`` on a ``std::vector`` type. YAML
+I/O does not partial specialize ``SequenceTraits`` on ``std::vector<>`` because that
 would force all vectors to be sequences. An example use of the macros:

 .. code-block:: c++
@@ -906,7 +906,7 @@ have need for multiple documents. The top level node in their YAML schema
 will be a mapping or sequence. For those cases, the following is not needed.
 But for cases where you do want multiple documents, you can specify a
 trait for you document list type. The trait has the same methods as
-SequenceTraits but is named DocumentListTraits. For example:
+``SequenceTraits`` but is named ``DocumentListTraits``. For example:

 .. code-block:: c++

@@ -919,16 +919,16 @@ SequenceTraits but is named DocumentListTraits. For example:

 User Context Data
 =================
-When an llvm::yaml::Input or llvm::yaml::Output object is created their
+When an ``llvm::yaml::Input`` or ``llvm::yaml::Output`` object is created their
 constructors take an optional "context" parameter. This is a pointer to
 whatever state information you might need.

 For instance, in a previous example we showed how the conversion type for a
 flags field could be determined at runtime based on the value of another field
 in the mapping. But what if an inner mapping needs to know some field value
 of an outer mapping? That is where the "context" parameter comes in. You
-can set values in the context in the outer map's mapping() method and
-retrieve those values in the inner map's mapping() method.
+can set values in the context in the outer map's ``mapping()`` method and
+retrieve those values in the inner map's ``mapping()`` method.

 The context value is just a void*. All your traits which use the context
 and operate on your native data types, need to agree what the context value
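A hedged sketch of passing state between an outer and an inner mapping through the IO context; the ``ParseState``, ``Outer`` and ``Inner`` types are invented, and ``IO::setContext()``/``IO::getContext()`` are assumed available as in current ``YAMLTraits.h``:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"

  struct ParseState { bool IsLittleEndian = true; };   // illustrative shared state
  struct Inner { unsigned Value = 0; };
  struct Outer { bool LittleEndian = true; Inner Payload; };

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Inner> {
    static void mapping(IO &io, Inner &in) {
      // The inner mapping can consult whatever the outer mapping stashed.
      auto *state = static_cast<ParseState *>(io.getContext());
      (void)state;   // e.g. interpret "value" differently per endianness
      io.mapRequired("value", in.Value);
    }
  };
  template <> struct MappingTraits<Outer> {
    static void mapping(IO &io, Outer &out) {
      io.mapRequired("little-endian", out.LittleEndian);
      ParseState state{out.LittleEndian};
      void *saved = io.getContext();
      io.setContext(&state);                 // visible to the inner mapping
      io.mapRequired("payload", out.Payload);
      io.setContext(saved);                  // restore the previous context
    }
  };
  } // namespace yaml
  } // namespace llvm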
@@ -939,9 +939,9 @@ traits use to shared context sensitive information.
 Output
 ======

-The llvm::yaml::Output class is used to generate a YAML document from your
+The ``llvm::yaml::Output`` class is used to generate a YAML document from your
 in-memory data structures, using traits defined on your data types.
-To instantiate an Output object you need an llvm::raw_ostream, an optional
+To instantiate an Output object you need an ``llvm::raw_ostream``, an optional
 context pointer and an optional wrapping column:

 .. code-block:: c++
@@ -957,7 +957,7 @@ streaming as YAML is a mapping, scalar, or sequence, then Output assumes you
 are generating one document and wraps the mapping output
 with "``---``" and trailing "``...``".

-The WrapColumn parameter will cause the flow mappings and sequences to
+The ``WrapColumn`` parameter will cause the flow mappings and sequences to
 line-wrap when they go over the supplied column. Pass 0 to completely
 suppress the wrapping.

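A hedged sketch of constructing an ``Output`` with an explicit wrap column; ``MyDoc`` stands in for any type whose traits are already defined, as in the earlier sketches:

.. code-block:: c++

  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"

  void emitWrapped(MyDoc &doc) {
    // Third constructor argument is the wrap column; pass 0 to disable wrapping.
    llvm::yaml::Output yout(llvm::outs(), /*Ctxt=*/nullptr, /*WrapColumn=*/60);
    yout << doc;
  }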
@@ -980,7 +980,7 @@ The above could produce output like:
 ...

 On the other hand, if the top level data structure you are streaming as YAML
-has a DocumentListTraits specialization, then Output walks through each element
+has a ``DocumentListTraits`` specialization, then Output walks through each element
 of your DocumentList and generates a "---" before the start of each element
 and ends with a "...".

@@ -1008,9 +1008,9 @@ The above could produce output like:
 Input
 =====

-The llvm::yaml::Input class is used to parse YAML document(s) into your native
+The ``llvm::yaml::Input`` class is used to parse YAML document(s) into your native
 data structures. To instantiate an Input
-object you need a StringRef to the entire YAML file, and optionally a context
+object you need a ``StringRef`` to the entire YAML file, and optionally a context
 pointer:

 .. code-block:: c++
@@ -1024,7 +1024,7 @@ the document(s). If you expect there might be multiple YAML documents in
 one file, you'll need to specialize DocumentListTraits on a list of your
 document type and stream in that document list type. Otherwise you can
 just stream in the document type. Also, you can check if there was
-any syntax errors in the YAML be calling the error() method on the Input
+any syntax errors in the YAML be calling the ``error()`` method on the Input
 object. For example:

 .. code-block:: c++
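Finally, a hedged sketch of the parse-then-check-``error()`` pattern covered by this last hunk; ``MyDoc`` again stands in for any type with suitable traits:

.. code-block:: c++

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"

  bool parseDoc(llvm::StringRef buffer, MyDoc &doc) {
    llvm::yaml::Input yin(buffer);   // a context pointer could be passed here too
    yin >> doc;
    if (yin.error()) {               // non-zero std::error_code on failure
      llvm::errs() << "YAML parse failed\n";
      return false;
    }
    return true;
  }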
