Add man pages

parent 28b4f3c074
commit ef7cb5a562

README.md | 12 ++++++------

@@ -77,15 +77,15 @@ If you want to add a comment to the file, make sure the line starts with a `#` and Lexesis will ignore that line.
 Consider the following example:
 
-    Capital_letters = [A-Z]
-    Numbers = [0-9]
+    CAPITAL = [A-Z]
+    NUMBER = [0-9]
 
     # This is a comment
-    All_letters = [a-zA-Z]
+    ALL = [a-zA-Z]
 
-Here we have 3 different tokens `Capital_letters`, `Numbers` and `All_letters`.
-Note that the names for the tokens may only consist of capital letters, small letters and underscores; other characters are not accepted.
-When we run `A` through the generated lexer, it will return that it's a `Capital_letters`, since it is specified higher than `All_letters`.
+Here we have 3 different tokens `CAPITAL`, `NUMBER` and `ALL`.
+Note that the names for the tokens should only consist of capital letters, small letters and underscores; other characters are not recommended, in order to work with as many backends as possible.
+When we run `A` through the generated lexer, it will return that it's a `CAPITAL`, since it is specified higher than `ALL`.
 
 ### Regular expressions
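Stepping outside the diff for a moment: the priority rule in the rewritten paragraph above is mechanical enough to sketch. The snippet below only illustrates that rule; it is not the code Lexesis actually generates (the generated lexer is not part of this commit). Candidate regexes are tried in `.lxs` file order, so the input `A`, which matches both `[A-Z]` and `[a-zA-Z]`, is reported as `CAPITAL`.

    // Sketch of the priority rule only -- not Lexesis's generated code.
    #include <iostream>
    #include <regex>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // Rules in .lxs file order: earlier entries have higher priority.
        std::vector<std::pair<std::string, std::regex>> rules = {
            {"CAPITAL", std::regex("[A-Z]")},
            {"NUMBER",  std::regex("[0-9]")},
            {"ALL",     std::regex("[a-zA-Z]")},
        };

        std::string input = "A";
        for (const auto& [name, re] : rules) {
            if (std::regex_match(input, re)) {
                std::cout << input << " -> " << name << "\n";  // prints "A -> CAPITAL"
                break;  // stop at the first (highest-priority) match
            }
        }
    }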
@@ -0,0 +1,64 @@
.\" generated with Ronn/v0.7.3
|
||||
.\" http://github.com/rtomayko/ronn/tree/0.7.3
|
||||
.
|
||||
.TH "LEXESIS" "1" "May 2016" "" ""
|
||||
.
|
||||
.SH "NAME"
|
||||
\fBLexesis\fR \- A language agnostic lexical analyser generator
|
||||
.
|
||||
.SH "SYNOPSIS"
|
||||
\fBLexesis\fR [\fB\-d\fR \fIoutputdir\fR] [\fB\-l\fR \fIlanguage\fR] [\fB\-n\fR \fIlexername\fR] <inputfile\.lxs>
|
||||
.
|
||||
.SH "DESCRIPTION"
|
||||
Generate a lexical analyser from a Lexesis(5) rules file
|
||||
.
|
||||
.P
|
||||
Options:
|
||||
.
|
||||
.TP
|
||||
\fB\-h\fR, \fB\-\-help\fR
|
||||
show a help message and exit
|
||||
.
|
||||
.TP
|
||||
\fB\-\-version\fR
|
||||
show program\'s version number and exit
|
||||
.
|
||||
.TP
|
||||
\fB\-d\fR \fIdirectory\fR, \fB\-\-outputdir\fR=\fIdirectory\fR
|
||||
Output the generated files to this directory
|
||||
.
|
||||
.br
|
||||
[default: \.]
|
||||
.
|
||||
.TP
|
||||
\fB\-l\fR \fIlanguage\fR, \fB\-\-lang\fR=\fIlanguage\fR, \fB\-\-language\fR=\fIlanguage\fR
|
||||
The programming language to generate source files for
|
||||
.
|
||||
.br
|
||||
[default: c++]
|
||||
.
|
||||
.TP
|
||||
\fB\-n\fR \fIlexername\fR, \fB\-\-name\fR=\fIlexername\fR
|
||||
Use this name for the generated lexer, the default is
|
||||
.
|
||||
.br
|
||||
based on the input file name
|
||||
.
|
||||
.SH "EXAMPLES"
|
||||
\fBLexesis \-l c++ \-d lexers \-n MyLexer lexer\.lxs\fR
|
||||
.
|
||||
.P
|
||||
\fBLexesis \-\-language c++ \-\-outputdir lexers \-\-name MyLexer lexer\.lxs\fR
|
||||
.
|
||||
.SH "AUTHORS"
|
||||
.
|
||||
.IP "\(bu" 4
|
||||
Thomas Avé
|
||||
.
|
||||
.IP "\(bu" 4
|
||||
Robin Jadoul
|
||||
.
|
||||
.IP "" 0
|
||||
.
|
||||
.SH "SEE ALSO"
|
||||
Lexesis(5)
|
|
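(The page above was generated with Ronn v0.7.3, as its header comments note; the markdown that follows is the corresponding Ronn source. Regenerating the roff would look something like `ronn --roff <source>.ronn`; the actual source file names are not shown in this view.)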
@@ -0,0 +1,52 @@
Lexesis(1) -- A language agnostic lexical analyser generator
============================================================

SYNOPSIS
--------

`Lexesis` [`-d` <outputdir>] [`-l` <language>] [`-n` <lexername>] <inputfile.lxs>


DESCRIPTION
-----------

Generate a lexical analyser from a Lexesis(5) rules file.

Options:

* `-h`, `--help`:
  show a help message and exit

* `--version`:
  show program's version number and exit

* `-d` <directory>, `--outputdir`=<directory>:
  Output the generated files to this directory
  [default: .]

* `-l` <language>, `--lang`=<language>, `--language`=<language>:
  The programming language to generate source files for
  [default: c++]

* `-n` <lexername>, `--name`=<lexername>:
  Use this name for the generated lexer; the default is
  based on the input file name


EXAMPLES
--------

`Lexesis -l c++ -d lexers -n MyLexer lexer.lxs`

`Lexesis --language c++ --outputdir lexers --name MyLexer lexer.lxs`


AUTHORS
-------

* Thomas Avé
* Robin Jadoul


SEE ALSO
--------

Lexesis(5)
@@ -0,0 +1,43 @@
.\" generated with Ronn/v0.7.3
|
||||
.\" http://github.com/rtomayko/ronn/tree/0.7.3
|
||||
.
|
||||
.TH "LEXESIS" "5" "May 2016" "" ""
|
||||
.
|
||||
.SH "NAME"
|
||||
\fBLexesis\fR \- Syntax rules for Lexesis \.lxs files
|
||||
.
|
||||
.SH "DESCRIPTION"
|
||||
Input files for Lexesis(1) have a \fB\.lxs\fR extension and have a set of some very simple rules: On each line, a new type of token is specified with a different priority, starting with the highest at the top of the file and lowest at the bottom\. If your input matches more than one of the regexes in your input file, the generated lexer will choose the token with the highest priority\. The line begins with the name for the new type of token, following a \fB=\fR and finally the regex used to match tokens of that type\. If you want to add a comment to the file, make sure the line starts with a \fB#\fR and Lexesis will ignore that line\.
|
||||
.
|
||||
.P
|
||||
Consider the following example:
|
||||
.
|
||||
.IP "" 4
|
||||
.
|
||||
.nf
|
||||
|
||||
CAPITAL = [A\-Z]
|
||||
NUMBER = [0\-9]
|
||||
|
||||
# This is a comment
|
||||
ALL = [a\-zA\-Z]
|
||||
.
|
||||
.fi
|
||||
.
|
||||
.IP "" 0
|
||||
.
|
||||
.P
|
||||
Here we have 3 different tokens \fBCAPITAL\fR, \fBNUMBER\fR and \fBALL\fR\. Note that the names for the tokens only consist of capital letters, small letter and underscores, other characters are not recommended, in order to work for most possible backends\. When we run \fBA\fR through the generated lexer, it will return that it\'s a \fBCAPITAL\fR, since it is specified higher than \fBALL\fR\.
|
||||
.
|
||||
.SH "AUTHORS"
|
||||
.
|
||||
.IP "\(bu" 4
|
||||
Thomas Avé
|
||||
.
|
||||
.IP "\(bu" 4
|
||||
Robin Jadoul
|
||||
.
|
||||
.IP "" 0
|
||||
.
|
||||
.SH "SEE ALSO"
|
||||
Lexesis(1)
|
|
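One practical consequence of the priority order described above, as a hypothetical illustration that is not part of this commit (it assumes plain characters in a regex match themselves and that concatenation and `*` work as in ordinary regexes): keyword rules must be listed above a catch-all identifier rule, otherwise an input such as `if` would be reported as `IDENTIFIER` rather than `IF`.

    IF = if
    IDENTIFIER = [a-z][a-z]*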
@@ -0,0 +1,34 @@
Lexesis(5) -- Syntax rules for Lexesis .lxs files
=================================================

DESCRIPTION
-----------

Input files for Lexesis(1) have a `.lxs` extension and follow a set of very simple rules:
On each line, a new type of token is specified with a different priority, starting with the highest at the top of the file and the lowest at the bottom.
If your input matches more than one of the regexes in your input file, the generated lexer will choose the token with the highest priority.
Each line begins with the name of the new token type, followed by a `=` and finally the regex used to match tokens of that type.
If you want to add a comment to the file, make sure the line starts with a `#` and Lexesis will ignore that line.

Consider the following example:

    CAPITAL = [A-Z]
    NUMBER = [0-9]

    # This is a comment
    ALL = [a-zA-Z]

Here we have 3 different tokens `CAPITAL`, `NUMBER` and `ALL`.
Note that the names for the tokens should only consist of capital letters, small letters and underscores; other characters are not recommended, in order to work with as many backends as possible.
When we run **A** through the generated lexer, it will return that it's a `CAPITAL`, since it is specified higher than `ALL`.


AUTHORS
-------

* Thomas Avé
* Robin Jadoul


SEE ALSO
--------

Lexesis(1)