Discourse Processing of Operative Notes

The Role of Semantic and Discourse Information in
Learning the Structure of Surgical Procedures
Ramon Maldonado, Travis Goodwin, Sanda M. Harabagiu
The University of Texas at Dallas
Human Language Technology Research Institute
http://www.hlt.utdallas.edu
Michael A. Skinner
UT Southwestern Medical Center
Children's Medical Center of Dallas
Outline
1. The problem
2. The data
3. Semantic Processing of Operative Notes
4. Discourse Processing of Operative Notes
5. Learning the Structure of Surgeries
6. Experimental Results
7. Conclusions
The Problem
Electronic operative notes are generated after surgical procedures for documentation and billing. Like many other Electronic Medical Records (EMRs), these operative notes have the potential for an important secondary use: they can enable surgical clinical research aimed at improving evidence-based medical practice.
In particular, if we had the tools and techniques to automatically
process such surgical operative notes, we could extract evidence
about the techniques used in each step of the procedure, the
observations made during the operations, and perhaps most
importantly, the management solutions devised by the surgeon in the
face of unexpected or unusual situations.
Semantic Processing
Understanding surgical techniques through the sequence of actions and observations made by surgeons is not possible without semantic parsing:
• to be able to reliably extract the predicates that map to the same actions and observations across multiple operative notes that describe the same type of surgical procedure;
• to address this problem, semantic predicates and their arguments need to be automatically identified in operative notes.
[Figure: example semantic parses of operative note sentences, with predicates and argument roles annotated]
• "Under general anesthesia, the patient was sterilely prepped and draped." Predicates: anesthesia (condition), prep, drape; arguments include ARG1 (the patient), ARGm-adv (sterilely), and the condition modifier "general".
• "A 1-centimeter vertical incision was made through the skin and fascia of the umbilicus with a 15 blade." Predicate: make incision; arguments: ARG-Size (1-centimeter), ARG-Direction (vertical), ARG-Path (the skin; the fascia of the umbilicus), ARG-Instrument (15 blade).
• "Blunt dissection was used to enter the abdomen through this incision." Predicates: dissection, enter; arguments: ARGm-adv (blunt), ARG1-LOC (the abdomen); "this incision" participates in event co-reference (result) and serves as ARG-Means (Path_Shape).
• "A 10-millimeter trocar was placed through the incision into the abdomen and connected to CO2 gas to create a pneumoperitoneum." Predicates: place, connect, create pneumoperitoneum; arguments: ARGm-size (10-millimeter), ARG1 (trocar), ARG-Path (the incision), ARG2 of connect (CO2 gas).
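As an illustration only (this data model is ours, not the authors'), the second example sentence above could be represented in code along these lines:

```python
from dataclasses import dataclass, field

@dataclass
class PAS:
    """A predicate-argument structure extracted from an operative note."""
    predicate: str
    arguments: dict[str, str] = field(default_factory=dict)

# "A 1-centimeter vertical incision was made through the skin and fascia
#  of the umbilicus with a 15 blade."
make_incision = PAS(
    predicate="make incision",
    arguments={
        "ARG-Size": "1-centimeter",
        "ARG-Direction": "vertical",
        "ARG-Path": "the skin; the fascia of the umbilicus",
        "ARG-Instrument": "15 blade",
    },
)
```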
Clinical Text Processing
We claim that in order to analyze surgical procedures we
need to consider both semantic and discourse information
to:
a) first discover which actions and observations
correspond to each surgical step; and
b) semantically align the predicate-argument structures, to be able to cover all their arguments, capture their semantic variability, and discern their paraphrases.
Outline
1. The problem
2. The data
3. Semantic Processing of Operative Notes
4. Discourse Processing of Operative Notes
5. Learning the Structure of Surgeries
6. Experimental Results
7. Conclusions
The corpus: Two types of Surgeries
We performed semantic and discourse analysis on a corpus of 3,546 operative notes, provided by a pediatric surgeon from Children's Medical Center in Dallas, TX. The notes document two common types of pediatric operations, namely appendectomy and humerus repair.
• An appendectomy is the surgical removal of the vermiform appendix. This procedure is normally performed as an emergency procedure, when the patient is suffering from acute appendicitis.
• The humerus repair surgery is performed to repair a fracture of the humerus. The humerus is the only bone in the upper arm, running from shoulder to elbow. It is commonly fractured, often by sports injuries, accidents, or falls.
More about the data
• In our data we had 2,816 appendectomy notes and 730 humerus repair notes. The appendectomy notes were written by 12 different surgeons and the humerus repair notes were generated by 14 different surgeons.
• All notes were de-identified to protect the privacy of the patients. The study was performed under an IRB exemption granted by the Institutional Review Board of the University of Texas Southwestern Medical Center.
Surgical Steps
When performing a surgery, a sequence of surgical steps is typically followed.
Surgical steps for (a) appendectomy and (b) humerus fracture repair:

(a) Appendectomy
STEP 1 (Patient Introduction): Bring patient in OR; Place patient on table
STEP 2 (Prepping the Patient): Perform anesthesia; Prep and drape; Perform time-out; Administer pre-op antibiotics; Insert a catheter
STEP 3 (Entering the abdomen): Make incision; Place trocar; Place port; Insufflation; Insert camera; Get to the appendix
STEP 4 (Division of the appendix): Grab appendix; Divide appendix; Bag appendix; Remove appendix
STEP 5 (Clean up & Closure): Remove instruments; Desufflation; Irrigation; Close wound; Dress wound
STEP 6 (Post-surgical Processing): Extubation; Awaken patient; Transfer patient out of the OR

(b) Humerus fracture repair
STEP 1 (Patient Introduction): Bring patient in OR; Place patient on table
STEP 2 (Prepping the Patient): Perform anesthesia; Prep and drape; Perform time-out; Administer pre-op antibiotics
STEP 3 (Reducing the fracture): Perform a reduction procedure
STEP 4 (Pinning the fracture): Insert pins; Insert wires; Stabilize the fracture
STEP 5 (Bending the pins): Bend pins; Cut pins
STEP 6 (Applying Splint): Pad with felt; Place patient in a splint
STEP 7 (Post-surgical Processing): Extubation; Transfer patient to recovery room; Awaken patient
Semantic Constraints
In processing operative notes, we were interested in capturing actions and observations similar to the ones listed in the descriptions of the surgical steps, but also those surgical maneuvers involving actions not typical of any of the steps, as such findings may suggest an unexpected phenomenon or complication.
• To generate automatic processing techniques that capture the semantics and discourse of the operative notes, we constrained the predicate-argument structures to involve only information about the patient, anatomical locations, surgical instruments, and surgical supplies, including the operative table and the operating room.
A predication was also included in our analysis if it was coordinated syntactically with another predication considered of interest based on the constraints presented above. This assumption allowed us to produce semantic information for 114,422 verbs and 22,458 nominals, of which 108,332 and 14,925, respectively, were included due to syntactic coordination.
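As an illustration of how such a constraint might be applied in practice, here is a hypothetical sketch (not the authors' implementation): a toy keyword lexicon stands in for real recognizers of patients, anatomical locations, instruments, and supplies, and the coordination check is reduced to a boolean flag.

```python
# Hypothetical filter illustrating the semantic constraints described above.
ALLOWED_TERMS = {
    "patient", "abdomen", "appendix", "fascia", "umbilicus",   # patient / anatomy
    "trocar", "blade", "camera", "port",                        # surgical instruments
    "suture", "drape", "catheter", "table",                     # supplies / OR objects
}

def keep_predication(argument_texts: list[str], coordinated_with_kept: bool) -> bool:
    """Keep a PAS if any argument mentions an allowed entity,
    or if it is syntactically coordinated with a kept PAS."""
    mentions_allowed = any(
        term in arg.lower() for arg in argument_texts for term in ALLOWED_TERMS
    )
    return mentions_allowed or coordinated_with_kept

# Example: the PAS for "place trocar through the incision" would be kept.
print(keep_predication(["A 10-millimeter trocar", "the incision"], False))  # True
```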
Outline
1. The problem
2. The data
3. Semantic Processing of Operative Notes
4. Discourse Processing of Operative Notes
5. Learning the Structure of Surgeries
6. Experimental Results
7. Conclusions
Predicate Argument Semantics for
Surgeries
[Figure: semantic processing of the sentence "An infraumbilical incision was made, and the peritoneal cavity was entered.": (a) the sentence and its dependency parse; (b) the semantic parse of the same sentence, with predicates highlighted and argument roles specified; (c) lexical parsing and normalization; (d) the semantic parse customized for the surgical domain, with the predicates "make incision" and "enter" and the Argm-LOC arguments "infraumbilical" and "peritoneal cavity".]
Joint Learning of Syntactic and Semantic Parsing
[Figure: processing pipeline. An operative note passes through tokenization, part-of-speech tagging, and dependency parsing, producing a syntactic dependency parse; predicate identification, predicate disambiguation, argument identification, and argument classification then produce the semantic parse.]
Semantic Context of Predicate-Argument Structures for Surgery
Each predicate-argument structure (PAS) for surgery has its own semantic context, which is represented by a "window of PASs" of size C centered on PASi: the C/2 PASs identified in an operative note before PASi as well as the C/2 PASs identified after PASi.
• Given a predicate-argument structure (PAS) of a surgical action or surgical observation, we learned a high-dimensional vector representation of the PAS that can be used to predict its semantic context. Learning such representations is important because they enable us to identify the same surgical action or observation reported in different operative notes, even when different words or expressions are used.
• We hypothesize that the same predications used to document the same type of surgical procedure have very similar semantic contexts. Thus, knowing the context of a predication, we could predict the most likely surgical action or observation by maximizing the average log probability of the PASs from the semantic context:

$$\frac{1}{S}\sum_{s=1}^{S}\ \sum_{-C \le j \le C,\ j \ne 0} \log p\left(\mathrm{PAS}_{s+j} \mid \mathrm{PAS}_{s}\right)$$
Learning Embeddings of Predicate-Argument Structures for Surgeries
• Using the Skip-gram model.
[Figure: Skip-gram architecture for PAS embeddings. A sliding window of size 2C+1 moves over the sequence PAS1, PAS2, ..., PASP of an operative note; the central PASk is mapped by the weight matrix M to its embedding, and the matrix M' with a softmax layer predicts the surrounding PASk-C, ..., PASk-1, PASk+1, ..., PASk+C.]
Learning of the Embeddings
Learning of the PAS embeddings in the neural network architecture represented
in the previous Figure takes place in two phases: the feed-forward process and
the back-propagation process.
1. In the feed-forward process, the input PAS is first mapped into its
embedding vector by the weight matrix M. After that, the embedding
vector is mapped back into the 1-of-S space by another weight matrix M’
and the resulting vector is used to predict the surrounding PASs using the
softmax function.
2. As training examples are used, the errors from the prediction of the PAS
context are computed and the prediction errors are propagated back to
update the neural network in the backpropagation process. This leads to
updates to the M and M’ matrices.
When the training process converges, the weight matrix M is regarded as the
learned PAS representations in a multi-dimensional space.
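A minimal sketch of Pred2Vec-style training, under the assumption that each operative note has already been reduced to a sequence of normalized PAS strings; gensim's skip-gram Word2Vec (version 4 or later) is used here as a stand-in for the architecture described above, and all names and hyperparameters are illustrative.

```python
from gensim.models import Word2Vec

notes_as_pas_sequences = [
    ["induce_anesthesia", "prep_and_drape", "make_incision", "place_trocar"],
    ["induce_anesthesia", "make_incision", "insufflate_abdomen", "place_port"],
    # ... one sequence of PAS identifiers per operative note
]

pred2vec = Word2Vec(
    sentences=notes_as_pas_sequences,
    vector_size=100,   # dimensionality of the PAS embeddings (matrix M)
    window=5,          # C: number of context PASs considered on each side
    sg=1,              # skip-gram: predict the context from the central PAS
    min_count=1,
    epochs=20,
)

embedding = pred2vec.wv["make_incision"]                  # learned PAS embedding
similar = pred2vec.wv.most_similar("make_incision", topn=3)
```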
Outline
1. The problem
2. The data
3. Semantic Processing of Operative Notes
4. Discourse Processing of Operative Notes
5. Learning the Structure of Surgeries
6. Experimental Results
7. Conclusions
Discourse Processing
To automatically capture the discourse of the corpus, we have considered two methods.
I. The first one is based on the assumption that predicate-argument structure embeddings model a semantic context, which can be seen as a portion of the description of a surgical step. When similar embeddings are clustered, they provide a semantic representation of the surgical step, as it emerges from all the operative notes.
II. The second method considers that each operative note provides a separate discourse that can be automatically segmented to account for the description of each surgical step.
Finally, we combined the strengths of each method using the expectation-maximization (EM) algorithm to (a) change the clusters of embeddings based on evidence from the discourse segments; and (b) correct the discourse segments based on the content of the updated clusters of predicate embeddings.
Clustering
Predicate Embeddings
The predicate-argument structures (PAS) embeddings are vectors of real numbers that
can be clustered.
A variety of clustering methods can be used, but the most appealing one is the K-Means
clustering method.
Given that for each surgery type we know the number of surgical steps (6 steps for appendectomies and 7 steps for humerus fracture repairs), a flat clustering method such as K-Means can be used, since the number of resulting clusters (K = number of surgical steps) is known.
The vector representation of the embeddings enables the computation of the distance
between embeddings by using the cosine similarity metric, introduced in Information
Retrieval vector models.
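A minimal sketch of this clustering step, assuming the PAS embeddings are available as a NumPy matrix; L2-normalizing the vectors makes Euclidean K-Means behave approximately like clustering under cosine similarity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

embeddings = np.random.rand(1000, 100)   # placeholder for the learned PAS embeddings
unit_embeddings = normalize(embeddings)  # L2-normalize each embedding

# K = number of surgical steps (6 for appendectomy)
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
labels = kmeans.fit_predict(unit_embeddings)   # cluster id (surgical step) per PAS
```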
Automatic Discourse Segmentation
The central idea is to consider the structure of an operative note as a
function of the connectivity patterns of the clinical terms that comprise
it. We use the TextTiling Algorithm.
The TextTiling algorithm consists of three steps, performed after sentence boundaries are identified:
(1) term tokenization;
(2) lexical score determination; and
(3) segment boundary identification.
The automatic identification of the PASs in the operative notes has already determined the sentence boundaries and performed the tokenization. To compute lexical scores, the notes are first divided into blocks, each consisting of several token sequences. Each token sequence has a length of 20 tokens, and each block consists of 6 such sequences. Each token t of a block b receives a weight wt,b computed as the frequency of the token in the block. These weights enable the computation of the similarity between blocks b1 and b2 of the operative note:

$$sim(b_1, b_2) = \frac{\sum_t w_{t,b_1}\, w_{t,b_2}}{\sqrt{\sum_t w_{t,b_1}^2\ \sum_t w_{t,b_2}^2}}$$
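A minimal sketch of the block-comparison computation, assuming the note is already a flat list of tokens; the token-sequence length (20) and block size (6 sequences) follow the values above, while boundary selection is simplified to inspecting local minima of the gap scores.

```python
from collections import Counter
from math import sqrt

SEQ_LEN, BLOCK_SEQS = 20, 6
BLOCK_LEN = SEQ_LEN * BLOCK_SEQS

def block_similarity(block1: list[str], block2: list[str]) -> float:
    """Cosine similarity between the term-frequency vectors of two blocks."""
    w1, w2 = Counter(block1), Counter(block2)
    num = sum(w1[t] * w2[t] for t in w1.keys() & w2.keys())
    den = sqrt(sum(v * v for v in w1.values()) * sum(v * v for v in w2.values()))
    return num / den if den else 0.0

def gap_scores(tokens: list[str]) -> list[float]:
    """Similarity between adjacent blocks at every candidate boundary."""
    scores = []
    for gap in range(BLOCK_LEN, len(tokens) - BLOCK_LEN + 1, SEQ_LEN):
        left = tokens[gap - BLOCK_LEN:gap]
        right = tokens[gap:gap + BLOCK_LEN]
        scores.append(block_similarity(left, right))
    return scores  # low scores suggest segment boundaries
```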
Learning to Identify Surgery Steps
in Each Operative Note
Ideally, each vector representation of a predicate-argument structure assigned to a cluster Cli by the K-Means algorithm would represent one of the surgical actions or observations performed during step i of the operation.
Using the EM algorithm, we learn semantic representations of the surgical steps, which consist of clusters of predicate-argument structure (PAS) embeddings pertaining to the same surgical step.
Details of Learning
The PASs were identified automatically from the corpus of operative notes, generating a vocabulary V of PASs. To be able to learn the optimal assignment of a PAS from V into one of the clusters from CL, we define a likelihood function L that uses as arguments R, the corpus, which is observable, as well as a mapping function Z which assigns each PAS to a certain cluster:

$$L(Z, R) = \prod_{r_i \in R}\ \prod_{\mathrm{PAS}_{ij} \in r_i} p\left(Z_{i,j}\right) \qquad (1)$$

where Zi,j is the result of the assignment of PASij from report ri to one of the clusters from CL, which we wanted to learn. The log of the likelihood (log likelihood) is therefore:

$$\log L(Z, R) = \sum_{r_i \in R}\ \sum_{\mathrm{PAS}_{ij} \in r_i} \log p\left(Z_{i,j}\right) \qquad (2)$$
The Steps of EM
We used the EM algorithm iteratively to maximize the expected log likelihood of the joint assignment Z, given R.
The E-Step of EM finds the expected value of the log likelihood:

$$E\left[\log L(Z(t), R)\right] = \sum_{r_i \in R}\ \sum_{\mathrm{PAS}_{ij} \in r_i} \left[ \log p\left(\mathrm{PAS}_{ij} \mid Z_{i,j}\right) + \log p\left(Z_{i,j}\right) \right] \qquad (3)$$

To compute the expected value of the log likelihood, we estimated p(PASij | Zi,j) as the number of times PASij was assigned to the cluster indicated by Zi,j, divided by the cardinality of that cluster:

$$P_{MLE}\left(\mathrm{PAS}_{ij} \mid Z_{i,j}\right) = \frac{\#\text{ of } \mathrm{PAS}_{ij} \text{ assigned by } Z_{i,j}}{\left|\text{cluster indicated by } Z_{i,j}\right|} \qquad (4)$$

and we estimated P(Zi,j) by its normalized cardinality:

$$P_{MLE}\left(Z_{i,j}\right) = \frac{\left|\text{cluster indicated by } Z_{i,j}\right|}{\sum_{i=1}^{k} \left|Cl_i\right|} \qquad (5)$$

The M-Step maximizes (3) over Z, thus updating the cluster assignments:

$$Z(t) = \operatorname*{argmax}_{Z'} \log L(Z', R) \qquad (6)$$
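A minimal sketch of one EM iteration over the cluster assignments, assuming each note is a list of PAS identifiers; it implements the MLE estimates (4)-(5) and a greedy per-PAS argmax for the M-step, and omits the discourse constraint introduced on the next slide.

```python
from collections import Counter, defaultdict
from math import log

EPS = 1e-9
K = 6  # number of clusters (= surgical steps for appendectomy)

def em_iteration(notes, assign):
    """One EM iteration. notes[i] is a list of PAS identifiers;
    assign[i][j] is the current cluster of the j-th PAS of note i."""
    # Statistics needed for the MLE estimates (4) and (5)
    cluster_size = Counter()
    pas_in_cluster = defaultdict(Counter)
    for note, note_assign in zip(notes, assign):
        for pas, z in zip(note, note_assign):
            cluster_size[z] += 1
            pas_in_cluster[z][pas] += 1
    total = sum(cluster_size.values())

    # M-step: reassign each PAS to the cluster maximizing
    # log p(PAS | Z) + log p(Z); EPS smoothing avoids log(0).
    def score(pas, z):
        p_pas = (pas_in_cluster[z][pas] + EPS) / (cluster_size[z] + K * EPS)
        p_z = (cluster_size[z] + EPS) / (total + K * EPS)
        return log(p_pas) + log(p_z)

    return [[max(range(K), key=lambda z: score(pas, z)) for pas in note]
            for note in notes]
```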
Discourse Constraints
When performing the maximization in the M-step of the EM algorithm, we used the constraint that, for all i and j, Zi,j ≤ Zi,j+1, signifying the requirement that no PASij+1 can be assigned to a cluster Clq if PASij was assigned to a cluster Clp with p > q. This constraint maintains the sequential integrity of the surgical steps represented by the clusters of embeddings.

$$Z(t) = \operatorname*{argmax}_{Z'} \log L(Z', R) \qquad (6)$$
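One way (not necessarily the authors' implementation) to realize this constraint within a note is a Viterbi-style dynamic program that, given per-PAS log-scores for each of the k clusters, finds the highest-scoring non-decreasing assignment.

```python
import numpy as np

def constrained_assignment(log_scores: np.ndarray) -> list[int]:
    """log_scores[t, c] = log p(PAS_t | cluster c) + log p(cluster c).
    Returns a non-decreasing cluster assignment maximizing the total score."""
    n, k = log_scores.shape
    best = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    best[0] = log_scores[0]
    for t in range(1, n):
        # running maximum over clusters <= c enforces Z_t >= Z_{t-1}
        run_best, run_arg = -np.inf, 0
        for c in range(k):
            if best[t - 1, c] > run_best:
                run_best, run_arg = best[t - 1, c], c
            best[t, c] = run_best + log_scores[t, c]
            back[t, c] = run_arg
    # backtrack from the best final cluster
    z = [int(np.argmax(best[-1]))]
    for t in range(n - 1, 0, -1):
        z.append(int(back[t, z[-1]]))
    return z[::-1]
```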
Learning the discourse/temporal order
of clusters
To infer the order between clusters, we took into account the observation that each cluster contains a representation of multiple PASs, that each of these PASs also occurred across multiple notes ri, and that in each note they occurred in various segments Sil.
• For each PASa we were able to compute an average segment order number na based on the segmentation information. Because each PASa was assigned to a unique cluster by K-Means, we inferred the order number of cluster Clq as the average of the numbers na over all PASa assigned to Clq, as sketched below.
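A minimal sketch of this ordering computation, assuming we have, for each PAS, the segment indices at which it occurs across notes and its K-Means cluster id; the function names are illustrative.

```python
from collections import defaultdict
from statistics import mean

def cluster_order(segments_of_pas: dict, cluster_of_pas: dict) -> list:
    """segments_of_pas[pas] -> list of segment indices across all notes;
    cluster_of_pas[pas] -> K-Means cluster id.
    Returns the cluster ids sorted by their inferred temporal position."""
    avg_segment = {pas: mean(segs) for pas, segs in segments_of_pas.items()}
    per_cluster = defaultdict(list)
    for pas, cl in cluster_of_pas.items():
        per_cluster[cl].append(avg_segment[pas])
    return sorted(per_cluster, key=lambda cl: mean(per_cluster[cl]))
```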
Learning segmentation of surgical notes
After inferring the order of the clusters in CL, we also linked each segment Sil from any operative note ri to a cluster from CL by selecting the cluster to which most PASs from Sil were assigned.
Next, the initial assignments Z(0) required by the EM algorithm are possible because each PAS from any segment Sil receives the same cluster assignment.
• To generate better surgical note segments, we take into account the fact that:
A. after the EM algorithm has converged, each PAS from a note has its own cluster assignment Z, with 1 ≤ Z ≤ k;
B. any PASia which precedes a PASib in an operative note ri must have an assignment Zi,a ≤ Zi,b; and
C. we group all PASs with the same cluster assignment into a new segment, as sketched below.
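A minimal sketch of the re-segmentation in item C, grouping consecutive PASs that share a cluster assignment.

```python
from itertools import groupby

def resegment(pas_list: list, assignments: list) -> list:
    """Return a list of segments, each a list of PASs with one cluster label."""
    segments = []
    for _, group in groupby(zip(pas_list, assignments), key=lambda x: x[1]):
        segments.append([pas for pas, _ in group])
    return segments

# Example: assignments [1, 1, 2, 2, 2, 3] yield three segments.
print(resegment(["a", "b", "c", "d", "e", "f"], [1, 1, 2, 2, 2, 3]))
```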
Outline
1. The problem
2. The data
3. Semantic Processing of Operative Notes
4. Discourse Processing of Operative Notes
5. Learning the Structure of Surgeries
6. Experimental Results
7. Conclusions
Structure of Surgeries
The idea: it is essential to
a) identify all the ways in which the same surgical action or observation is expressed in any of the operative notes
   - by computing a Multiple Sequence Alignment (MSA) of all the sequences of surgical actions or observations reported in the surgical notes; and
b) convert this knowledge into a temporal script graph, to learn in an unsupervised way the structure of surgical procedures
   - by converting the MSA into a graph that takes into account the semantic and discourse information processed with the EM algorithm.
Multiple Sequence Alignments
An MSA algorithm uses as input a set of sequences S1 ... Sn ∈ Σ* over an alphabet Σ, along with:
• a cost function CS : Σ × Σ → ℝ for substitutions; and
• a gap cost CGAP ∈ ℝ for insertions or deletions.
• Our approach: given the set of N surgical notes R, an MSA of R is a matrix A in which the j-th column of A represents the sequence Sj containing the PASs identified in note rj ∈ R, possibly with some gaps G interspersed between the PASs of rj, such that each row of A contains at least one non-gap.
If a row Ar of A contains two non-gaps, we consider those PASs aligned, while aligning a non-gap with a gap is interpreted as an insertion or a deletion.
Computing the Cost of the MSA

$$\mathrm{Cost}(A) = \sum_{A_r \in A}\ \sum_{\mathrm{PAS}_{ji},\, \mathrm{PAS}_{jk} \in A_r} \cos\left(E(\mathrm{PAS}_{ji}),\, E(\mathrm{PAS}_{jk})\right)$$

where E(PASij) refers to the embedding vector of PASij.
• Because the range of the cosine similarity function for vectors is [-1, 1], we chose a gap cost CGAP of 0.
• To calculate the lowest-cost MSA, we used the polynomial-time algorithm for pairwise alignment reported in [Needleman and Wunsch, 1970] and recursively aligned these pairwise alignments, considering each alignment as a single sequence whose elements are pairs, as in [Higgins and Sharp, 1988]; a sketch of the pairwise step is given below.
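A minimal sketch of the pairwise Needleman-Wunsch step, with cosine similarity as the substitution score and a gap score of 0, treating higher total similarity as better; the progressive merging of pairwise alignments (Higgins and Sharp, 1988) is not shown, and this is an illustration rather than the authors' exact implementation.

```python
import numpy as np

def cos(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def needleman_wunsch(seq_a, seq_b, gap=0.0):
    """Globally align two sequences of PAS embedding vectors, maximizing the
    summed cosine similarity of aligned pairs.
    Returns a list of (i, j) index pairs; None marks a gap."""
    n, m = len(seq_a), len(seq_b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = np.arange(n + 1) * gap
    score[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i, j] = max(
                score[i - 1, j - 1] + cos(seq_a[i - 1], seq_b[j - 1]),  # align
                score[i - 1, j] + gap,                                   # gap in seq_b
                score[i, j - 1] + gap,                                   # gap in seq_a
            )
    # traceback
    align, i, j = [], n, m
    while i > 0 and j > 0:
        if np.isclose(score[i, j],
                      score[i - 1, j - 1] + cos(seq_a[i - 1], seq_b[j - 1])):
            align.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif np.isclose(score[i, j], score[i - 1, j] + gap):
            align.append((i - 1, None))
            i -= 1
        else:
            align.append((None, j - 1))
            j -= 1
    while i > 0:
        align.append((i - 1, None))
        i -= 1
    while j > 0:
        align.append((None, j - 1))
        j -= 1
    return align[::-1]
```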
Building Surgical Script Structures
The Idea: Induce a temporal script graph to represent the structure of
surgical procedures.
A temporal script graph consists of nodes, representing a set of
related (or paraphrased) surgical actions or observations, and edges
between nodes, which represent a possible temporal evolution of the
surgical procedure, as induced from the operative notes.
The generation of the graph consists of three steps:
• Step 1: initialize the nodes;
• Step 2: decide which nodes should be connected;
• Step 3: simplify the graph by merging similar nodes.
Our Implementation
• In Step 1, we started by considering each row of the matrix A (representing the MSA of the corpus R), and used only those PASs having sufficient frequency in the corpus (> 50).
• In Step 2, considering that the MSA also induces a temporal ordering, we allowed edges between nodes Ni and Nj only if i < j. In addition, we used two constraints:
  o The first constraint is that for all PASa ∈ Nj, there must be a PASb ∈ Ni that precedes PASa in at least one note from R. Because each PAS was assigned to a cluster by the EM algorithm, we compute for each node Ni the average cluster number (ACN) by taking into account all PASs aligned into Ni.
  o The second constraint imposed for connecting a node Ni to Nj (with i < j) is: 0 ≤ ACN(Nj) - ACN(Ni) ≤ 1.
• In Step 3, we merged similar nodes by considering the centroid vector of all the embeddings of PASs from the same node; the merging test is sketched below. We merged two nodes if the cosine similarity between their centroids was greater than 0.8. We considered merging nodes Ni and Nj only when:
  i. there was an edge between them in the graph; and
  ii. 0 ≤ ACN(Nj) - ACN(Ni) ≤ 1.
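A minimal sketch of the Step 3 merging test, assuming each node stores the embeddings of its aligned PASs and its average cluster number (ACN); the 0.8 threshold and the ACN condition follow the constraints listed above, while the data structures are illustrative.

```python
import numpy as np

def centroid(embeddings: np.ndarray) -> np.ndarray:
    """Mean embedding of all PASs aligned into a node."""
    return embeddings.mean(axis=0)

def should_merge(node_i, node_j, has_edge: bool, threshold: float = 0.8) -> bool:
    """node_i / node_j: dicts with 'embeddings' (2-D array) and 'acn' (float).
    Merge only if the nodes are connected, their ACNs are compatible, and
    their centroids are sufficiently similar."""
    if not has_edge:
        return False
    if not (0.0 <= node_j["acn"] - node_i["acn"] <= 1.0):
        return False
    ci, cj = centroid(node_i["embeddings"]), centroid(node_j["embeddings"])
    sim = float(np.dot(ci, cj) / (np.linalg.norm(ci) * np.linalg.norm(cj)))
    return sim > threshold
```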
[Figure: temporal script graph learned for appendectomy; each node groups paraphrased surgical actions or observations]
Node 1.1: Obtain consent parent; Bring patient awake op room
Node 1.2: Place patient table supine position
Node 2: Place (Foley) catheter; Drape abdomen; Prep abdomen solution; Administer antibiotics; Induce anesthesia
Node 3.1: Make incision; Use blade incision; Create incision skin; Infraumbilical incision with Veres needle in peritoneal cavity
Node 3.2: Enter abdomen; Open fascia; Confirm Veres needle in good position with positive water drop test; Blunt dissection; Insufflate abdomen
Node 3.3: Place port; Place trocar camera; Introduce camera
Node 4.1: Identify appendix; Visualize appendix; Confirm diagnosis; Note appendix inflamed; Note appendix perforated
Node 4.2: Mobilize appendix; Grasp appendix; Staple appendix base; Divide appendix; Separate appendix tissue
Node 5.1: Take appendix from abdomen using Endocatch bag; Remove appendix from abdomen; Place appendix bag; Place bag port
Node 5.2: Retrieve appendix bag
Node 6.1: Irrigate area; Irrigate abdomen; Send analysis appendix pathology
Node 6.2: Note hemostatic line intact; Inspect hemostatic line; Note hemostasis; Inspect abdomen pathology
Node 7.1: Remove port visualization; Remove trocar
Node 7.2: Close fascia suture; Close fascia Vicryl; Desufflate abdomen; Close skin suture; Infiltrate Marcaine skin incisions
Node 7.3: Dress wound; Place dressing; Cleanse wound
Node 8: Extubate patient; Awake patient; Tolerate well patient procedure
Outline
1. The problem
2. The data
3. Semantic Processing of Operative Notes
4. Discourse Processing of Operative Notes
5. Learning the Structure of Surgeries
6. Experimental Results
7. Conclusions
Evaluation Setup
In order to evaluate the individual and combined impact of both semantic and
discourse information, we considered the quality of clusters (and thus surgical
structures) discovered with and without semantic information and with and
without discourse information.
• When evaluating the role of semantic information, we contrasted the performance of our Pred2Vec approach against a baseline approach in which the vector representation of a PAS is simply a linear interpolation of the individual word embedding vectors learned by Word2Vec.
• When evaluating the role of discourse information, we contrasted the quality of clusters refined from discourse segmentation using the EM algorithm against a straightforward K-Means implementation.
We measured the quality of learned clusters for four configurations:
(1) using syntactic information without discourse processing,
(2) using syntactic information with discourse processing,
(3) using semantic information without discourse processing, and
(4) using semantic information with discourse processing.
Results
Three measures of cluster quality:
i. the Davies-Bouldin index, which estimates the average similarity between each cluster and its most similar neighbor;
ii. the Dunn index, the ratio between the minimal inter-cluster distance and the maximal intra-cluster distance; and
iii. the Silhouette coefficient, which contrasts the average distance between PASs assigned to the same cluster against the average distance to PASs assigned to other clusters. (A computation sketch is given below.)
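A minimal sketch of how these measures could be computed, assuming the PAS embeddings and their cluster labels are available as NumPy arrays; Davies-Bouldin and Silhouette come from scikit-learn, while the Dunn index (not in scikit-learn) is computed directly from pairwise distances.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score, silhouette_score, pairwise_distances

def dunn_index(X: np.ndarray, labels: np.ndarray) -> float:
    """Minimal inter-cluster distance divided by maximal intra-cluster distance."""
    d = pairwise_distances(X)
    clusters = np.unique(labels)
    intra = max(d[np.ix_(labels == c, labels == c)].max() for c in clusters)
    inter = min(d[np.ix_(labels == a, labels == b)].min()
                for a in clusters for b in clusters if a < b)
    return inter / intra

# embeddings: (num_PAS, dim) array; labels: cluster assignment per PAS
# print(davies_bouldin_score(embeddings, labels))
# print(silhouette_score(embeddings, labels))
# print(dunn_index(embeddings, labels))
```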
Discussion
Approach   Semantic   Discourse   DBI     DI      SC
(1)        No         No          1.580   0.607   -0.640
(2)        No         Yes         1.443   0.560   -0.358
(3)        Yes        No          1.067   1.009   -0.736
(4)        Yes        Yes         0.573   1.451   -0.612
(DBI = Davies-Bouldin index; DI = Dunn index; SC = Silhouette coefficient)
Clearly, the best performance was achieved when using both semantic and discourse processing (approach (4)). Without access to discourse information, the Dunn and Davies-Bouldin indices drop by 30.5% and 46.3%, respectively. This highlights the impact not only of discourse information, but also of the power of our EM-based approach. Interestingly, using discourse information without semantic information achieves the third-best Dunn and Davies-Bouldin indices, but the best Silhouette coefficient (-0.358). This suggests that while Pred2Vec provides significantly improved cluster quality, further research is needed to determine the optimal number of clusters when Pred2Vec is used, because Pred2Vec reduces the sparsity in the data. Unsurprisingly, without access to semantic or discourse information, approach (1) performs the worst, illustrating the importance of semantic and discourse information for discovering the structure of surgical procedures.
Outline
1. The problem
2. The data
3. Semantic Processing of Operative Notes
4. Discourse Processing of Operative Notes
5. Learning the Structure of Surgeries
6. Experimental Results
7. Conclusions
Conclusions
In this paper, we presented a novel method of learning the structure of a type of surgical procedure when a corpus of operative notes is available.
• We have shown how this structure can be learned when
a) semantic information is available in the form of predicate-argument structures that capture knowledge about surgical actions and observations;
b) neural learning of multi-dimensional embeddings of predicate-argument structures (Pred2Vec) is applied; and
c) discourse information indicating the segments of an operative note is provided.
• We have presented a novel way of semantically representing the surgical steps and provided a framework for learning the most likely representation of the surgical steps and their segmentation in the operative notes.
The experimental evaluations on a large corpus documenting two types of surgical procedures have yielded promising results.
Questions?
Thank you!