Introduction to Hugin.
Hugin is a commercial software package which allows the user to represent Bayesian
networks using a relatively straightforward set of graphical tools. Hugin is also available in
a more restricted version which is freely available to non-commercial users. More details
of Hugin and related software for calculating Bayesian networks can be found in Appendix
A.
1 Notation
This notation is fairly standard: a line, or bar, over a node name indicates the 'not'
state of that node, if the node is dichotomous. For historical reasons the ultimate
question node, the H node, is an exception. Any useful Bayesian network used in the
forensic sciences must have at least one proposition and at least one observation. For
this session only two nodes need to be considered; these can be notated:
• H - the ultimate question - usually takes two states:
1. Hp - The suspect is the offender.
2. Hd - Some person other than the suspect is the offender.
For the examples in this practical session the proposition shall be stated as the
question: is suspect the offender?, and will have corresponding states yes
and no. In subsequent practical sessions the states Hp and Hd will be used.
• E - the evidence - in the simplified sense in which evidence is used for these exercises
usually takes two states:
1. E - The trace found to be associated with a suspect “matches”, in some sense,
that from the crimescene.
2. Ē - The trace found to be associated with a suspect does not “match” that observed
at the crimescene.
2 The problem
We shall be using Hugin to conduct the calculations for a very simple Bayesian network.
This network is so simple that it uses none of the graph properties elaborated upon in the
previous lectures, and all the values can be calculated by hand.
The situation is:
• Some offence has been committed.
• An item of trace evidence has been found at the crimescene.
• The trace evidence is thought to be certainly connected to the offence.
• It is certain that the trace evidence has been left by the offender.
• A suspect has been found who has an indistinguishable trace.
• The trace evidence occurs at a rate of 1% in the population.
We can use Bayes' theorem to calculate the probability that the suspect is the
offender. Denote by E the event that the trace from the suspect is indistinguishable
from the trace recovered from the crimescene.
In that case Bayes' theorem can be stated:

Pr(Hp|E) = Pr(Hp) Pr(E|Hp) / (Pr(Hp) Pr(E|Hp) + Pr(Hd) Pr(E|Hd))
If we assume a uniform prior, that is Pr(Hp) = Pr(Hd) = 0.5, then by simple substitution
this can be rewritten:

Pr(Hp|E) = (0.5 × 1) / ((0.5 × 1) + (0.5 × 0.01))
         = 0.5 / (0.5 + 0.005)
         = 0.990099
         ≡ 99.0099%.
Our aim in this practical session is to produce a simple Bayesian network which will
replicate this simple calculation.
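Before building the network it can help to see the same arithmetic as a short script. The following is a plain Python sketch of the hand calculation above, not Hugin code; the function name is our own:

```python
# Posterior probability that the suspect is the offender, given a "match",
# with a uniform prior and a 1% trace frequency in the population.

def posterior_hp(prior_hp=0.5, p_e_given_hp=1.0, p_e_given_hd=0.01):
    """Bayes' theorem for the two-hypothesis, single-evidence case."""
    prior_hd = 1.0 - prior_hp
    numerator = prior_hp * p_e_given_hp
    denominator = numerator + prior_hd * p_e_given_hd
    return numerator / denominator

print(posterior_hp())  # about 0.990099, i.e. 99.0099%
```

The Bayesian network we build below performs exactly this computation.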
3 A first Bayesian network
Run Hugin, and a window like the one shown will appear.
This is the main Hugin window. Like most GUI applications it has a pane which
represents the current network, called the network pane. It is usually best to
run Hugin with the network pane maximised within the Hugin window.
First we need to think about the nodes in the network. As we have H and E in the equation,
then eventually we shall need a node representing each of these uncertain quantities. For
the moment, however, we shall concentrate upon the H node.
Use the discrete chance tool to add a discrete chance node to the network. You
can place the node anywhere in the network pane.
We now need some properties for this node. To edit the node properties highlight it by
left clicking on it, then either press CTRL+enter, or select the node properties option
from the edit menu.
Here we can set all the properties for a discrete node. We shall call it H as this is going
to be the node which proposes that the suspect is the offender, or not the offender. We
set this in the node tab on this dialogue. It is useful at this point to give the node a
meaningful label. Here I have given it a label ‘‘H - is suspect the offender?’’.
This label will be shown in the network pane, and in the outputs from the network, so it
is worth the effort of giving nodes meaningful labels.
Some other node attributes should be changed using the states tab. By default
there are two states, and the buttons on the right hand side can be used to add more
states to the node. In this instance we only need to use the two states we have, however,
we could do with renaming them to yes and no, to reflect their meaning. You do this by
typing the new name in the illustrated box and pressing the change name button.
After this press the OK button.
Now that the node has been added we can add to it a table of values. The ‘‘H - is
suspect the offender?’’ node has only two states, no and yes. We wish to
associate some prior probability with each of these states. To do this we need to look at the table
associated with this node.
The tables are really the most important, and in some respects the most difficult, aspect
of a Bayesian network. The tables appear in a separate window which can be shown by
using either the toolbar button provided, or the show table window option from the
view menu. These are indicated in the figure. Selecting either of these produces the table window.
Notice how there is no table displayed in this pane. To get a table in the pane it is
necessary to select the ‘‘open tables’’ option from the view menu. Do so and a table
will appear in the table pane.
Here we can see the table in its pane. The tables are tables of probabilities where the node
is a parent node, or conditional probabilities for nodes which are child nodes. As this is a
node with two states, and is the only node, its corresponding table has only two states,
the yes and no specified earlier. At the moment each state has a number beside it. This
is the probability of the node being in that particular state. Here Pr(yes) = Pr(no) = 1
which cannot be, as the sum would be greater than one. It is better, although not
absolutely necessary, to change these to Pr(yes) = Pr(no) = 0.5, which would be correct.
Hugin will usually treat these probabilities as proportions, so were we to leave this table
with a column of 1's the calculations would still be correct in the end. To avoid
confusion it is usually better at this stage to manually make sure that the column sums
are one. To do this simply select each value and reset it to 0.5.
It is generally true that in any given Hugin table the columns represent states of
the node which are exclusive and exhaustive. This means that each column should
contain values which are non-negative and sum to one. If this is not the case then Hugin,
unless instructed to do otherwise, will rescale each value in each column to
ensure that the column sum is one.
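The rescaling Hugin performs is simple column normalisation. A minimal sketch in plain Python (our own function, not Hugin's API):

```python
# Each column of a table is divided by its sum so that it forms a
# probability distribution over the node's states.

def normalise_columns(table):
    """table: list of columns; each column is a list of non-negative values."""
    return [[v / sum(col) for v in col] for col in table]

# A column of 1's, as Hugin creates by default, becomes a uniform 0.5/0.5:
print(normalise_columns([[1.0, 1.0]]))  # [[0.5, 0.5]]
```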
Furthermore, note that a zero or unit (1) entry for a particular cell is perfectly acceptable,
and indeed, becomes necessary for many cells in these conditional probability tables where
a given conjunction of conditions may be impossible, or inevitable.
This may be a minimal network, comprising as it does only one node, but Hugin can still
run it. To run the network select the button on the toolbar with a representation of
lightning upon it.
This will run the network. As the network is being run the display will change completely
to Hugin’s run mode.
Initially in run mode Hugin will show a tree like pane to the left, and the network pane to
the right. To see a bar representation of the values in the states of the node click on the
root of the tree structure in the tree pane. Above, this has already been done, and the
root of the branches indicated.
Here we can see that the probabilities associated with each state of the is suspect the
offender? node are 50 and 50. Hugin will use percentages (%) for the values associated
with each state for each node. The precision with which these percentages are displayed
can be changed with the belief precision item from the view menu.
A way of telling the network that the state for any node is no longer uncertain is by setting
the value for any state of that node to 100%. To do this simply double click on the bar
corresponding to the state of the node which is to be set to 100%. The bar for that
state of the given node will turn red, its value will become 100%. As a consequence all
the other states for that node will become 0%. In the network pane the node will have a
small red indicator added to its lower right corner.
This is the equivalent of telling Hugin that it is now certain that the is suspect
the offender? node is yes, or no, depending on which state has been selected. In the figure
above the yes state has been selected and set to 100%. Press the button with the
depiction of a pencil to return to Hugin's network construction mode.
4 Extending the network
The Bayesian network constructed in Section 3 isn’t very useful in that it doesn’t calculate
anything. We now need to make this network conduct the calculations set out in Section
2.
Set Hugin back to the network construction mode and add a new node using the
discrete chance tool. Highlight this node and open the node properties tool.
Following the notation in Section 2 call the node E, and give it two states, yes and no.
You may also find it useful to give the node a label such as ‘‘E - does trace match
suspect?’’, and a width of 200.
Use the open tables option from the view menu, and the Hugin window, with both
table and network panes should look like this:
Use the link tool to add an edge between these two nodes. Getting the order right is
important: this is a directed graph, and the order in which the nodes are selected
determines the direction of the edge. The order is:
1. select the link tool by pressing the toolbar button,
2. move the mouse pointer over the H - is suspect the offender? node and
press and hold the left hand mouse button,
3. drag the mouse pointer over to the E - does trace match suspect? node,
4. release the left hand mouse button.
An arrow should appear between the H - is suspect the offender? and the E -
does trace match suspect? nodes. The head should be pointing towards the E -
does trace match suspect? node.
If an edge is pointing the wrong way, or not needed, then that edge can be deleted by left
clicking on it so that it is highlighted, and pressing the delete key, or using the delete
tool on the main Hugin toolbar.
When this has been done the network should look like:
Make sure the tables for both nodes are visible in the tables pane, and select the table
corresponding to the E - does trace match suspect? node. Notice how this table
has now become a table with two rows and two columns, rather than one with a single
column and two rows as the table corresponding to the H - is suspect the offender?
node still is§ .
§
The table can have the full label displayed for its corresponding node by dragging the first column to
the right using the left hand mouse button.
The reason for the change in the table is that Hugin now knows that the E - does trace
match suspect? node is a child node of the H - is suspect the offender? node.
This means that all the values in the cells of the table are now conditional probabilities.
The table now has the form:
                                  H - is suspect the offender?
E - does trace match suspect?          yes        no
yes                                     1          1
no                                      1          1
These tables can be confusing unless it is borne in mind at all times precisely which cell
means what. The columns of this table correspond to the states of the parent node,
which in this case is the hypothesis, H - is suspect the offender?, node. In this
example the suspect can either be the offender, or the suspect is not the offender. The
rows of the table correspond to the states of the child node, the observation node E
- does trace match suspect?. The trace from the crimescene can either match, or
not match, that trace recovered from the suspect.
Hugin, by default, will compress the label H - is suspect the offender? into a space
taking up about eight characters. This makes the labels of the conditional probability
table difficult to read. You can expand the space taken up by the labels by dragging the
edge of the cells corresponding to the states to the right. This will allow more space so
the entire label can be seen. Unfortunately this may result in the final column of cells
in the conditional probability table becoming pressed up against the right hand side of
the table window. The solution is to drag the toolbar on the left hand side of the table
window to the right slightly.
Taking the cells in the table in order:
1. The top left cell in the table can be read as the probability of observing a match
between the trace at the crimescene, and the trace recovered from the suspect,
were the suspect truly the offender. As we know that the trace is truly associated
with the offender, and if the suspect truly is the offender, then the trace would have
to have come from the suspect. Therefore we can say with certainty that the trace
would match that recovered from the suspect. This cell is equivalent to Pr(E|Hp ).
2. The bottom left cell in the table is the probability of not observing a match between
the trace at the crimescene, and the trace recovered from the suspect, were the
offender truly the suspect. As we know that the trace is truly associated with the
offender, and no person other than the offender, and the offender is the suspect,
then we know the trace came from the suspect, and no other person than the
suspect. Therefore there is a probability of zero, if the suspect is truly the offender,
that we will not observe a match in the trace evidence between the suspect and the
crimescene trace evidence. This cell is equivalent to Pr(Ē|Hp).
3. The top right cell is the probability of observing a match in the trace observations
were the offender some individual other than the suspect. This is Pr(E|Hd ), and in
this case is equivalent to the probability of observing a match between the crimescene
trace, and any randomly selected individual, and, from Section 2, is 1%, or 0.01.
4. The final cell in the bottom right of the table is the probability of not observing a
match between the crimescene trace, and some trace recovered from any randomly
selected individual. This is Pr(Ē|Hd), and is 99%, or 0.99, as if there is a 1%
probability of observing a match in the trace evidence from any randomly selected
individual, there must be a 99% probability that a match will not be observed.
The table below summarises the meaning of the cells in the table in the notation used in
Section 2:
Pr(E|H)        H = Hp         H = Hd
E              Pr(E|Hp)       Pr(E|Hd)
Ē              Pr(Ē|Hp)       Pr(Ē|Hd)
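The prior on H together with this conditional probability table is all the network needs to calculate the posterior. A minimal Python sketch of the propagation (our own variable names, not Hugin's API):

```python
# Prior on H and the conditional probability table Pr(E | H) from above.
prior = {"Hp": 0.5, "Hd": 0.5}
cpt = {                      # cpt[h][e] = Pr(E = e | H = h)
    "Hp": {"yes": 1.0,  "no": 0.0},
    "Hd": {"yes": 0.01, "no": 0.99},
}

def posterior_h(e_state):
    """Pr(H | E = e_state) by enumeration over the states of H."""
    joint = {h: prior[h] * cpt[h][e_state] for h in prior}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

post = posterior_h("yes")
print(post["Hp"], post["Hd"])  # ≈ 0.990099 and ≈ 0.009901
```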
Entering these values into their respective cells in the table, and running the model, having
set the value of the E - does trace match suspect? node to yes, as in Section 3 we
get¶ :
See how, in the left hand pane, when the E - does trace match suspect? node is set
to yes, the values for the states of the H - is suspect the offender? node change to
99.0099 for yes, and 0.9901 for no.
¶
You may have to change the belief precision item from the view menu to 4 or greater to see
sufficient decimal places.
This is exactly the value calculated for the yes state in Section 2. The value for no, using
the same prior, can be confirmed as:
Pr(Hd|E) = Pr(Hd) Pr(E|Hd) / (Pr(Hp) Pr(E|Hp) + Pr(Hd) Pr(E|Hd))
         = (0.5 × 0.01) / ((0.5 × 1) + (0.5 × 0.01))
         = 0.005 / (0.5 + 0.005)
         = 0.00990099
         ≡ 0.990099%.
5 Likelihood ratios
A likelihood ratio is considered a measure of “evidential strength” (Aitken & Taroni,
2004). It can be given as:
LR = Pr(E|Hp) / Pr(E|Hd)
where LR is the likelihood ratio. Using Hugin there are two equivalent ways of calculating
a likelihood ratio given any particular network.
5.1 LR calculated by instantiating E
In Section 4 we saw that the Hugin table associated with a child node was a table of
conditional probabilities. In the above instance we can see directly how to calculate this.
In the example Pr(E|Hp ) = 1, and Pr(E|Hd ) = 0.01, thus:
LR = Pr(E|Hp) / Pr(E|Hd) = 1 / 0.01 = 100.
However, in the instance where the prior is uniform, and the states of the posterior
dichotomous, then it can be shown that:
LRπ ≡ Pr(Hp|E) / Pr(Hd|E)
where LRπ indicates a likelihood ratio under the rather stringent conditions of uniform
prior and dichotomous outcome.
So by instantiating‖ the evidence node(s) we can set the network up, by inputting to the
network the known states for the evidence nodes, to calculate the ratio of the posterior
Hp to Hd, which is then equal to LRπ. From Section 4, Pr(Hp|E) = 99.0099%, and
Pr(Hd|E) = 0.990099%, so:
LRπ ≡ Pr(Hp|E) / Pr(Hd|E) = 99.0099 / 0.990099 = 100
and is equal in value to the likelihood ratio calculated from the conditional probability
table.
5.2 LR calculated by instantiating H
An equivalent method is to look at the values of Pr(E|Hp) and Pr(E|Hd) by first instantiating the H node to Hp, then reading off the value of Pr(E|Hp) from the E node. In the
case above, when H is set to Hp, Pr(E|Hp) = 1.00; when H is set to Hd, Pr(E|Hd) = 0.01.
The likelihood ratio can then be calculated as 1/0.01 = 100, exactly the same value as
above.
These two equivalent procedures give us a pair of methods by which we can calculate
likelihood ratios for any whole network, even networks which are too complicated for a
likelihood ratio to be easily calculated from the data directly. Which of the two to use
is entirely a matter of personal preference. These notes shall use the former method, as
it seems more intuitive to feed into a network those observations which are known.
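Both routes can be checked with a short sketch in plain Python (not Hugin output), using the values from the conditional probability table in Section 4:

```python
# Two equivalent routes to the likelihood ratio, as described above.
p_e_hp, p_e_hd = 1.0, 0.01           # Pr(E|Hp) and Pr(E|Hd) from the table

# Route 1: instantiate H, read off Pr(E|Hp) and Pr(E|Hd), and divide.
lr_from_cpt = p_e_hp / p_e_hd

# Route 2: instantiate E, take the ratio of posteriors under a uniform prior.
post_hp = 0.5 * p_e_hp / (0.5 * p_e_hp + 0.5 * p_e_hd)
post_hd = 1.0 - post_hp
lr_from_posteriors = post_hp / post_hd

print(lr_from_cpt, lr_from_posteriors)  # both 100, to floating point accuracy
```

The agreement of the two values illustrates that LRπ equals the likelihood ratio under the uniform-prior, dichotomous-outcome conditions stated above.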
‖
Running the network and making Pr(N) = 1, where N indicates any node.
6 Files created by Hugin
The network you have created can be saved as a file in the same way as any other
application can save its data. Hugin is a bit particular in that it will not allow you to use
hyphens in the filename, but otherwise everything is fairly standard.
The suffix for a Hugin network file is .oobn, and when a network is saved the particular
state of the network will be discarded. So if you run a network, then save it and close
the file, upon opening that file you will see the network construction pane, not any of the
other panes, and no node which has been set to 100% will still be set to 100%.
Another file which is created as Hugin runs a network is a .hlg file. This is a file of
information calculated by Hugin at run time, and is ASCII, so can be inspected by any
text editor. These files can be safely deleted.
7 Hugin hints and tips
1. Hugin places the levels of its conditional probability tables in order of the edge
placement. Take the following three node network:
H
C2
C1
Here we have nodes H, C1 and C2. Node C1 is a child of nodes C2 and H, and
its table is therefore a table of conditional probabilities. As each of the three nodes
has two states, C2 = {yes, no}, C1 = {State 1, State 2}, and H = {Hp, Hd}, the
table for C1 has 2³ = 8 cells.
If the C2 → C1 edge is drawn before the H → C1 edge, then the table will look
like:
H              Hp                  Hd
C2          yes      no        yes      no
C1
State 1      1        1         1        1
State 2      1        1         1        1
where the top level is the H node and the second level the C2 node. If the order in
which the edges are placed is reversed, that is the H → C1 edge placed before the
C2 → C1 edge, then the ordering of the table is reversed, and the table will look like:
C2             yes                 no
H           Hp      Hd         Hp      Hd
C1
State 1      1        1         1        1
State 2      1        1         1        1
These two tables are functionally identical, and whichever ordering is selected is a
matter of convenience, or personal preference.
2. Do not change the file name of any Hugin .oobn file. For Hugin to load a file the
file name must be the same as the class of object which is being loaded by Hugin.
This class name is the same as the file name when Hugin writes the file, so if you
change the file name then the class name, unless you edit the file, will no longer match
the filename, and Hugin will return an error and refuse to load the file.
If you find an error with a file which you know to be a legitimate Hugin file, and the
file simply refuses to load, open the file in a text editor and check the classname,
which is on the first line of the file, is the same as the file name. If not either change
the class name in the file, or the filename.
8 References
Aitken, C. & Taroni, F. (2004) Statistics and the Evaluation of Evidence for Forensic
Scientists. Wiley.
Gustafson, G. (1950) Age determination on teeth. Journal of the American Dental Association, 41, 45-54.
Taroni, F., Aitken, C., Garbolino, P. & Biedermann, A. (2006) Bayesian Networks and
Probabilistic Inference in Forensic Science. Wiley.
9 Exercise
From data in Gustafson (1950) the following table of frequencies for occlusal attrition
and age group can be constructed:
                attrition score
age group        0     1     2
<30             11     0     0
31-50            4     9     2
51+              0    10     5
Assume that this is a reasonable sample of the population of age groups and occlusal
attrition scores. Use Hugin to construct an appropriate Bayesian network and use that
network to answer the following:
1. What is the posterior probability of an individual who has been observed to have an
occlusal attrition score of 2 being aged between 30 and 50 years? Assume a uniform
prior for age.
2. What is the posterior distribution for age for an individual with occlusal attrition
score of 1? Assume a uniform prior for age.
3. What is the posterior distribution for occlusal attrition score for individuals in the
31-50 year age category? For this question it is important to use the prior distribution
based upon the data, Pr(Oj) = {15/41, 19/41, 7/41}, for the hand calculation.
Confirm each of your answers by hand calculation.
10 Solution to exercise
We have been given the following data:
                attrition score
age group        0     1     2
<30             11     0     0
31-50            4     9     2
51+              0    10     5
and have been instructed to construct a suitable Bayesian network to make some inferences based upon the relationship between age and occlusal attrition seen in the table.
The Bayesian network is very similar to the network constructed in Section 4, except that
each node will have three states. The parent node will be the age node, and the child
node the occlusal attrition score node. The link should run from the age node to
the occlusal attrition score node, and look like:
A − age
O − occlusal attrition score
Denote age as A with categories i, and occlusal attrition as O, with categories j. The table
associated with the occlusal attrition score is the probability of occlusal attrition
score given age, or Pr(Oj |Ai ). From the data this is easily calculated by dividing by the
row totals as:
Pr(Oj|Ai)       attrition score
age group        0      1      2
<30              1      0      0
31-50           4/15   9/15   2/15
51+              0     2/3    1/3
Having constructed this relatively simple Bayesian network we can then go on to answer
the enumerated questions.
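The first two answers can also be sketched in plain Python from the conditional table above (our own helper names, not Hugin's API; the prior defaults to uniform):

```python
# Pr(O_j | A_i) from the table above; attrition scores are indexed 0, 1, 2.
p_o_given_a = {
    "<30":   [1.0, 0.0, 0.0],
    "31-50": [4/15, 9/15, 2/15],
    "51+":   [0.0, 2/3, 1/3],
}

def posterior_age(score, prior=None):
    """Pr(A_i | O_j) by Bayes' theorem, uniform prior unless one is given."""
    ages = list(p_o_given_a)
    prior = prior or {a: 1 / len(ages) for a in ages}
    joint = {a: prior[a] * p_o_given_a[a][score] for a in ages}
    total = sum(joint.values())
    return {a: p / total for a, p in joint.items()}

print(round(posterior_age(2)["31-50"], 4))   # 0.2857  (question 1)
print({a: round(p, 4) for a, p in posterior_age(1).items()})
# {'<30': 0.0, '31-50': 0.4737, '51+': 0.5263}  (question 2)
```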
1. What is the probability of an individual who has been observed to have an occlusal
attrition score of 2 being aged between 30 and 50 years? Assume a uniform prior.
Using the network the probability, assuming uniform priors, and setting the occlusal
attrition score node to 2, can be read off as 28.57%.
The probability we want is Pr(A31−50|O2); by Bayes' theorem:

Pr(A31−50|O2) = Pr(A31−50) Pr(O2|A31−50) / Σi Pr(Ai) Pr(O2|Ai)
             = Pr(A31−50) Pr(O2|A31−50) / ([Pr(A<30) Pr(O2|A<30)] + [Pr(A31−50) Pr(O2|A31−50)] + [Pr(A51+) Pr(O2|A51+)])
substituting in a uniform prior, and values from the table of Pr(Oj|Ai):

Pr(A31−50|O2) = (1/3 × 2/15) / ([1/3 × 0] + [1/3 × 2/15] + [1/3 × 1/3])
             = (2/45) / (2/45 + 1/9)
             = 0.2857
which agrees with the value calculated by Hugin.
2. What is the posterior distribution for age for an individual with occlusal attrition
score of 1? Assume a uniform prior.
Setting the value of the occlusal attrition score node to 1 in the network, the
distribution can be read off as: Pr(A<30|O1) = 0, Pr(A31−50|O1) = 47.37%, and
Pr(A51+|O1) = 52.63%.
Calculating these values we can proceed as before:

Pr(Ai|O1) = Pr(Ai) Pr(O1|Ai) / Σi Pr(Ai) Pr(O1|Ai)
         = Pr(Ai) Pr(O1|Ai) / ([Pr(A<30) Pr(O1|A<30)] + [Pr(A31−50) Pr(O1|A31−50)] + [Pr(A51+) Pr(O1|A51+)])

As Pr(O1|A<30) = 0 we can dispense with calculating any term involving A<30.
For the 31-50 age group:
Pr(A31−50|O1) = Pr(A31−50) Pr(O1|A31−50) / ([Pr(A31−50) Pr(O1|A31−50)] + [Pr(A51+) Pr(O1|A51+)])
             = (1/3 × 9/15) / ((1/3 × 9/15) + (1/3 × 2/3))
             = (9/45) / (9/45 + 2/9)
             = 0.47368 ≈ 47.37%
and for the 51+ age group:

Pr(A51+|O1) = Pr(A51+) Pr(O1|A51+) / ([Pr(A31−50) Pr(O1|A31−50)] + [Pr(A51+) Pr(O1|A51+)])
           = (1/3 × 2/3) / ((1/3 × 9/15) + (1/3 × 2/3))
           = (2/9) / (9/45 + 2/9)
           = 0.52631 ≈ 52.63%
which is in agreement with the Hugin output.
3. What is the posterior distribution for occlusal attrition score for individuals in the
31-50 year age category? Use the prior distribution based upon the data,
Pr(Oj) = {15/41, 19/41, 7/41}.
By setting the value of the age node to the 31-50 age category the posterior distribution for occlusal attrition, given the data prior, can be read off as: Pr(O0 |A31−50 ) =
26.67%, Pr(O1 |A31−50 ) = 60.00%, and Pr(O2 |A31−50 ) = 13.33%.
Using Bayes' theorem:

Pr(Oj|A31−50) = Pr(Oj) Pr(A31−50|Oj) / Σj Pr(Oj) Pr(A31−50|Oj)
             = Pr(Oj) Pr(A31−50|Oj) / ([Pr(O0) Pr(A31−50|O0)] + [Pr(O1) Pr(A31−50|O1)] + [Pr(O2) Pr(A31−50|O2)])
Here we need Pr(Ai|Oj). Again this is easily calculated by dividing by the column
totals in the table of data:
Pr(Ai|Oj)       attrition score
age group        0       1      2
<30            11/15     0      0
31-50           4/15    9/19   2/7
51+              0     10/19   5/7
As Pr(O0) = 15/41, Pr(O1) = 19/41, and Pr(O2) = 7/41, the posterior probability
for O0 can be calculated:
Pr(O0|A31−50) = (15/41 × 4/15) / ((15/41 × 4/15) + (19/41 × 9/19) + (7/41 × 2/7))
             = 0.2666 ≈ 26.67%
and for O1 :
Pr(O1|A31−50) = (19/41 × 9/19) / ((15/41 × 4/15) + (19/41 × 9/19) + (7/41 × 2/7))
             = 0.60 ≈ 60%
and finally for O2 :
Pr(O2|A31−50) = (7/41 × 2/7) / ((15/41 × 4/15) + (19/41 × 9/19) + (7/41 × 2/7))
             = 0.13333 ≈ 13.33%
which all agree with the calculations given by Hugin.
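This last calculation can also be hand-checked with a short sketch in plain Python (not Hugin output), using the data prior and the column of Pr(A31−50|Oj) values from the table above:

```python
# Question 3: posterior over attrition score for the 31-50 age group,
# using the data-based prior Pr(O_j) = {15/41, 19/41, 7/41}.
prior_o = [15/41, 19/41, 7/41]
p_a_given_o = [4/15, 9/19, 2/7]          # Pr(A_31-50 | O_j), from the table

joint = [p * l for p, l in zip(prior_o, p_a_given_o)]
total = sum(joint)
print([round(j / total, 4) for j in joint])  # [0.2667, 0.6, 0.1333]
```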
Appendix A: Bayesian network software
There are several packages available which implement Bayesian networks. They fall into
two camps: those which are self-contained, and those which are libraries, intended for
you to write your own interfaces to. If you are new to Bayesian networks then you
should probably stick to the self-contained ones in the first instance.
1. self-contained
• Hugin is a windowed commercial package which handles Bayesian networks. It
is available for Windows, OSX, Solaris and GNU/Linux, and a non-commercial
version with limited features is available free of charge. The site for Hugin is:
http://www.Hugin.com/, and it is the package used for this course, following
Taroni et al. (2006).
• GENIE is an open source self-contained Bayesian network package similar to
Hugin. It can be downloaded from http://genie.sis.pitt.edu/.
2. libraries
• SMILE is a series of C++ libraries implementing Bayesian networks, random
forests, structural equation models and the like. It can be downloaded from
http://genie.sis.pitt.edu/.
• The following packages (and more) can be found for R:
(a) http://cran.r-project.org/web/packages/G1DBN/index.html
(b) http://cran.r-project.org/web/packages/gRain/index.html
(c) http://cran.r-project.org/web/packages/deal/index.html
These are all open source.
Appendix B: Files supporting this document
1. BayesCalculation.oobn - a simple network which calculates a Bayesian posterior
for a binary match type problem. Instantiate the E node to “yes”.
2. AgeEstimationExercise.oobn - another simple network which calculates a Bayesian
posterior for age based upon dental observations. Instantiate the occlusal attrition
score node to the observed score.