 Research article
 Open Access
Context-sensitive autoassociative memories as expert systems in medical diagnosis
BMC Medical Informatics and Decision Making volume 6, Article number: 39 (2006)
Abstract
Background
The complexity of contemporary medical practice has impelled the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit well with the vision of cognition emerging from current neurosciences.
Methods
We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto two bases of orthogonal vectors. A matrix memory stores the associations between the signs and symptoms and their corresponding diseases. A minimal numerical example is presented to show how to instruct the memory and how the system works. In order to provide a quick appreciation of the validity of the model and its potential clinical relevance, we implemented an application with real data. A memory was trained with published data of neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings.
Results
We show here that matrix memory models with associations modulated by context can perform automatic medical diagnosis. The sequential availability of new information over time drives a narrowing process that reduces the range of diagnostic possibilities. At each step the system provides a probabilistic map of the diagnoses possible at that moment. The system can incorporate clinical experience, thus building a representative database of historical data that captures geodemographic differences between patient populations. The trained model succeeded in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives plus true negatives over the totality of patients) 93.3%; and Cohen's kappa index 0.84.
Conclusion
Context-dependent associative memories can operate as medical expert systems. The model is presented in a simple and tutorial way to encourage straightforward implementation by medical groups. An application with real data, presented as a primary evaluation of the validity and potential of the model in medical diagnosis, shows that the model is a highly promising alternative for the development of accurate diagnostic tools.
Background
The extreme complexity of contemporary medical knowledge, together with the intrinsic fallibility of human reasoning, has led to sustained efforts to develop clinical decision support systems, in the hope that bedside expert systems could overcome the limitations inherent to human cognition [1]. Although the foundational hopes have not been fulfilled [2], the unaltered and increasing need for reliable automated diagnostic tools, and the important benefit to society brought by any success in this area, make every advance valuable.
To further the research on computer-aided diagnosis begun in the 1960s, neural network models [3] have been added to the pioneering work on artificial-intelligence systems. The advent of artificial neural networks able to identify multidimensional relationships in clinical data might improve the diagnostic power of the classical approaches. A large proportion of the neural network architectures applied to clinical diagnosis rest on multilayer feedforward networks instructed with backpropagation, followed by self-organizing maps and ART models [4, 5]. Although they perform with significant accuracy, their performance has nevertheless remained insufficient to dispel the common fear that they are "black boxes" whose functioning cannot be well understood and, consequently, whose recommendations cannot be trusted [6].
Associative memory models, an early class of neural models [7] that fit well with the vision of cognition emerging from today's brain neuroimaging techniques [8, 9], are inspired by the capacity of human cognition to build semantic nets [10]. Their known ability to support symbolic calculus [11] makes them a possible link between connectionist models and classical artificial-intelligence developments.
This work has three main objectives: a) to point out that associative memory models can act as expert systems in medical diagnosis; b) to show in a simple and straightforward way how to instruct a minimal expert system with associative memories; and c) to encourage the large-scale implementation of this methodology by medical groups.
Therefore, in this paper we address – in a tutorial approach – the building of associative-memory-based expert systems for the medical diagnosis domain. We favour comprehensibility, and the possibility of a straightforward implementation by medical groups, over the mathematical details of the model.
Methods
Context-dependent autoassociative memories with overlapping contexts
Associative memories are neural network models developed to capture some of the known characteristics of human memories [12, 13]. These memories associate arbitrary pairs of patterns of neuronal activity mapped onto real vectors. The set of associated pairs is stored superimposed and distributed throughout the coefficients of a matrix. These matrix memory models are contentaddressable and faulttolerant, and are well known to share with humans the ability of generalization and universalization [14].
In an attempt to overcome a serious problem of these classical models – their inability to evoke different associations depending on the context accompanying the same key stimulus – Mizraji [15] developed an extension of the model that performs adaptive associations. Context-dependent associations are based on a kind of second-order sigma-pi neuron [16], and showed interesting versatility when incorporated in modules employed to implement chains of goal-directed associations [17], disambiguation of complex stimuli [18], logical reasoning [19, 20], and multiple-criteria classification [21].
A context-dependent associative memory M acting as a basic expert system is a matrix
$\mathrm{M}=\sum_{i=1}^{k}\mathbf{d}_{i}\Big(\mathbf{d}_{i}\otimes \sum_{j(i)}\mathbf{s}_{j}\Big)^{\mathrm{T}}\qquad (1)$
where d_{i} are column vectors mapping k different diseases (the set {d} is chosen orthonormal), and s_{j(i)} are column vectors mapping the signs or symptoms accompanying the i-th disease (also an orthonormal set). The sets of symptoms corresponding to each disease can overlap.
The Kronecker product (⊗) between two matrices A and B is another matrix defined by

A ⊗ B = [a(i, j)·B] (2)

denoting that each scalar coefficient of matrix A, a(i, j), is multiplied by the entire matrix B. Hence, if A is n × m dimensional and B is k × l dimensional, the resultant matrix will have dimension nk × ml.
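As a quick check of equation (2), the dimension rule of the Kronecker product can be verified with NumPy's `np.kron`; the matrices below are arbitrary illustrative examples.

```python
import numpy as np

# Kronecker product (equation 2): each coefficient a(i, j) of A
# multiplies the entire matrix B.
A = np.array([[1, 2, 3],
              [4, 5, 6]])   # n x m = 2 x 3
B = np.array([[0, 1],
              [1, 0]])      # k x l = 2 x 2

C = np.kron(A, B)

# The result is nk x ml = 4 x 6.
print(C.shape)  # (4, 6)
```

The top-left 2 × 2 block of `C` equals a(1, 1)·B = 1·B, and the bottom-right block equals a(2, 3)·B = 6·B, exactly as the block definition states.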
Note that if the d are n-dimensional and the s are m-dimensional vectors, the memory is a rectangular n × nm matrix. Also, the memory M can be viewed as resulting from the Kronecker product (⊗) enlargement of each element of an n × n square autoassociative matrix d_{i} d_{i} ^{T} by a row vector representing the sum of the corresponding signs and symptoms:
$\mathrm{M}=\sum_{i=1}^{k}\mathbf{d}_{i}\mathbf{d}_{i}^{\mathrm{T}}\otimes \sum_{j(i)}\mathbf{s}_{j}^{\mathrm{T}}\qquad (3)$
By feeding the context-sensitive autoassociative module M with signs or symptoms, the system retrieves the set of possible diseases associated with such a set of symptoms, or a single diagnosis if the criteria suffice.
At resting conditions the system is grounded in an indifferent state g. If each disease was instructed only once, in the mathematics of the model this implies priming the memory with a linear combination in which every disease has equal weight
$\mathrm{M}(\mathbf{g}\otimes \mathrm{I}_{m\times m})=\sum_{i}\langle \mathbf{d}_{i},\mathbf{g}\rangle\,\mathbf{d}_{i}\Big(\sum_{j(i)}\mathbf{s}_{j}\Big)^{\mathrm{T}}=\sum_{i}\mathbf{d}_{i}\Big(\sum_{j(i)}\mathbf{s}_{j}\Big)^{\mathrm{T}}\qquad (4)$
where $\mathbf{g}=\sum_{i}\mathbf{d}_{i}$ and I is the m × m identity matrix, m being the dimension of the symptom vectors. From (4) it is evident that, after the priming, the context-dependent memory becomes a classical memory associating symptoms with diseases. If a set of sufficient concurrent signs and symptoms is presented to the waiting memory (σ = ∑s), a final diagnosis results after iteration.
It is important to point out that if the sets {s_{j(i)}} corresponding to each disease were disjoint, then any single symptom s_{j(i)} would be pathognomonic and sufficient to univocally diagnose d_{i}. Otherwise, the output will be a linear combination of possible diseases, each one weighted according to the scalar product between the set of actual symptoms (σ) and the set of symptoms corresponding to each disease: $\sum_{i}\langle \sum_{j(i)}\mathbf{s}_{j},\,\sigma\rangle\,\mathbf{d}_{i}$. See Figure 1 and its legend. Normalizing so that the scalar products sum to unity, this output provides a probabilistic mapping of the possible diseases associated with the clinical presentation.
NUMERICAL EXAMPLE
How to instruct the memory
Let us illustrate how to instruct the memory with a minimal numerical example. Consider the set of three diseases and their characteristic symptoms shown in Figure 2. The first task is to codify the sets of signs and diseases with orthogonal vectors, for which we will use the following orthogonal matrices.
$\textit{Diseases:}\quad \left[\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right]\;(\mathbf{d}_{1}\ \mathbf{d}_{2}\ \mathbf{d}_{3})\qquad \textit{Signs \& symptoms:}\quad 0.5\left[\begin{array}{cccc}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{array}\right]\;(\mathbf{s}_{1}\ \mathbf{s}_{2}\ \mathbf{s}_{3}\ \mathbf{s}_{4})$
According to the table and equation (1), we instruct the memory by adding one matrix per disease. For the first disease (whose signs are s_{1}, s_{3} and s_{4}) we have d_{1}d_{1} ^{T} ⊗ (s_{1} + s_{3} + s_{4})^{T}
$\left[\begin{array}{ccc}1&0&0\\0&0&0\\0&0&0\end{array}\right]\otimes \left[\begin{array}{cccc}1.5&0.5&-0.5&0.5\end{array}\right]=\left[\begin{array}{cccccccccccc}1.5&0.5&-0.5&0.5&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0\end{array}\right]$
In the same way we will have two other matrices for the other diseases. The sum of the three matrices constitutes the memory M.
$\mathrm{M}=\left[\begin{array}{cccccccccccc}1.5&0.5&-0.5&0.5&0&0&0&0&0&0&0&0\\0&0&0&0&1&0&0&-1&0&0&0&0\\0&0&0&0&0&0&0&0&1.5&-0.5&0.5&0.5\end{array}\right]$
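The instruction above can be reproduced in NumPy. This is a sketch under the example's coding assumptions: diseases are the canonical basis of R³, the four sign vectors are the columns of the scaled 4 × 4 Hadamard matrix (an orthonormal set), disease 1 carries s1, s3, s4 as stated in the text, and the symptom sets of diseases 2 and 3 are inferred from the memory M displayed above.

```python
import numpy as np

# Disease codes: canonical basis of R^3 (columns d1, d2, d3).
d = np.eye(3)
# Sign codes: columns of the scaled 4x4 Hadamard matrix (orthonormal).
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]])
s = [S[:, j] for j in range(4)]          # s1..s4

# Symptom sets per disease: disease 1 -> s1, s3, s4;
# disease 2 -> s2, s3; disease 3 -> s1, s2, s4 (inferred from M).
symptom_sets = [[0, 2, 3], [1, 2], [0, 1, 3]]

# Equations (1)/(3): M = sum_i  d_i d_i^T  (x)  (sum_j s_j)^T
M = sum(np.kron(np.outer(d[:, i], d[:, i]),
                sum(s[j] for j in js).reshape(1, -1))
        for i, js in enumerate(symptom_sets))

print(M.shape)   # (3, 12)
print(M[0, :4])  # first block row: [ 1.5  0.5 -0.5  0.5]
```

The three 3 × 12 per-disease matrices are superimposed by simple addition, exactly as in the displayed memory M.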
How the system works
See also Figure 1 and its legend.
Time step 1
Initial state of the system: Indifferent vector g^{T} = (d_{1} + d_{2} + d_{3})^{T} = [1 1 1]
A first clinical datum (s_{3}) arrives: s_{3} ^{T} = [0.5 0.5 -0.5 -0.5]
Preprocessing of input vectors is performed: h = g ⊗ s_{3}
h^{T} = [0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5 -0.5]
Resulting associated output: Mh (a linear combination of possible diagnoses)
$\text{output}(1)=\left[\begin{array}{c}1\\ 1\\ 0\end{array}\right]$
Resulting probabilistic map (each coefficient of the output vector is divided by the sum of them all):
$\text{prob}(1)=\left[\begin{array}{c}0.5\\ 0.5\\ 0\end{array}\right]$
Time step 2
A new symptom (s_{2}) arrives: s_{2} ^{T} = [0.5 -0.5 0.5 -0.5]
Preprocessing of input vectors is performed: h = output(1) ⊗ s_{2}
h^{T} = [0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0 0 0 0]
Resulting associated output (Mh):
$\text{output}(2)=\left[\begin{array}{c}0\\ 1\\ 0\end{array}\right]$
Resulting probabilistic map:
$\text{prob}(2)=\left[\begin{array}{c}0\\ 1\\ 0\end{array}\right]$
Final result
The system has arrived at a unique final diagnosis, corresponding to disease 2.
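The two query steps can be run end to end under the same coding assumptions (canonical disease vectors, Hadamard-type sign vectors, symptom sets as in Figure 2, with the sets of diseases 2 and 3 inferred from the displayed memory). The output of each step re-enters the memory as the context for the next.

```python
import numpy as np

# Rebuild the memory of the numerical example.
d = np.eye(3)
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]])
symptom_sets = [[0, 2, 3], [1, 2], [0, 1, 3]]
M = sum(np.kron(np.outer(d[:, i], d[:, i]),
                S[:, js].sum(axis=1).reshape(1, -1))
        for i, js in enumerate(symptom_sets))

context = np.ones(3)                # indifferent vector g = d1 + d2 + d3
for j in [2, 1]:                    # symptoms s3, then s2, arrive in turn
    h = np.kron(context, S[:, j])   # preprocessing: context (x) symptom
    context = M @ h                 # linear combination of diagnoses
    prob = context / context.sum()  # probabilistic map
    print(prob)

# After s3: [0.5 0.5 0. ]  -> diseases 1 and 2 remain possible
# After s2: [0.  1.  0. ]  -> unique diagnosis: disease 2
```

The feedback loop of Figure 1 is the reassignment of `context`: the narrowing happens because each output zeroes the blocks of diseases incompatible with the symptoms seen so far.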
REAL DATA APPLICATION – diagnosing late-onset neonatal sepsis
Late-onset sepsis (invasive infection occurring in neonates after 3 days of age) is an important and severe problem in infants hospitalized in neonatal intensive care units (NICUs) [22]. The clinical signs of infection in the newborn are variable, and the earliest manifestations are often subtle and nonspecific. In the presence of a clinical suspicion of sepsis, an early and accurate diagnostic algorithm would be of outstanding value, but none is yet available [23]. In a recent retrospective study that included 47 neonates with a clinical diagnosis of suspected sepsis, Martell and collaborators [24] assessed a group of clinical and laboratory variables – surgical history, metabolic acidosis, hepatomegaly, abnormal white blood cell (WBC) count, hyperglycemia and thrombocytopenia – determining their sensitivity, specificity, likelihood ratio and post-test probability. Sepsis was defined as a positive result on one or more blood cultures in a neonate with a clinical diagnosis of suspected sepsis. A prevalence of 34% was found for their NICU.
We instructed a context-dependent autoassociative memory according to equation (3) with the data published in [24], in order to evaluate its capacity to recognize patients with or without sepsis. As a test set, we used 15 cases of suspected neonatal sepsis coming from the same NICU (personal observations of one of us – AP). From equation (3) it is clear that the different clinical presentations of the individual cases are added up and summarized in the vector ($\sum_{j(i)}\mathbf{s}_{j}^{\mathrm{T}}$) representing the characteristic signs of each illness condition. We trained the memory instructing two terms d_{i} corresponding to the two final diagnoses: confirmed sepsis and absence of sepsis.
M = [septic] [septic]^{T} ⊗ [attributes _ septic]^{T} + [healthy] [healthy]^{T} ⊗ [attributes _ healthy]^{T}
The column vectors used for the septic and healthy conditions were [1 0]^{T} and [0 1]^{T} respectively.
The column vectors with the attributes corresponding to the septic and nonseptic patients were generated from the available data as follows. For each one of the variables studied in [24] (see Figure 3) we reconstructed the values of the true positive (TP), false positive (FP), true negative (TN) and false negative (FN) number of patients:
TP = sensitivity × E

FP = (sensitivity/LR) × NE

TN = specificity × NE

FN = N − (TP + FP + TN),

where E and NE denote the numbers of septic and non-septic neonates respectively, N = E + NE, and LR is the likelihood ratio (so that sensitivity/LR = 1 − specificity).
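This reconstruction can be sketched as a small function. The numbers below are illustrative only; the actual sensitivities, specificities and likelihood ratios are those reported in Figure 3 and reference [24].

```python
# Reconstructing the TP/FP/TN/FN counts for one variable from its
# published sensitivity, specificity and likelihood ratio.
def reconstruct_counts(sensitivity, specificity, lr, e, ne):
    """e = number of septic patients, ne = number of non-septic ones."""
    n = e + ne
    tp = sensitivity * e
    fp = (sensitivity / lr) * ne     # sensitivity/LR = 1 - specificity
    tn = specificity * ne
    fn = n - (tp + fp + tn)
    return tp, fp, tn, fn

# Illustrative values: prevalence 34% in 47 neonates gives roughly
# 16 septic and 31 non-septic patients; sens/spec/LR are hypothetical.
tp, fp, tn, fn = reconstruct_counts(0.75, 0.80, 3.75, e=16, ne=31)
print(round(tp, 1), round(fp, 1), round(tn, 1), round(fn, 1))  # 12.0 6.2 24.8 4.0
```

Note that the four counts need not be integers after this reconstruction; they are used only as coefficients of the attribute vectors, which are normalized afterwards.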
These values became the coefficients of the two thirteen-dimensional column vectors [attributes_septic] and [attributes_healthy]. This procedure is shown in Figure 4. Finally, after normalization, the vectors used for the instruction of the memory M are
[attributes_septic]^{T} = [0.0604 0.4225 0.0604 0.4225 0.1509 0.3320 0.0604 0.0604 0.3621 0.0604 0.4225 0.0604 0.4225]
[attributes_healthy]^{T} = [0.0142 0.4248 0.0142 0.4248 0.0566 0.3823 0.0354 0.0177 0.3859 0.0283 0.4106 0.0283 0.4106].
The memory M summarizes the accumulated experience with suspected late-onset sepsis of this particular NICU, through the clinical presentations of one year of hospitalized neonates.
To test our system we presented to the memory a set of 15 personal clinical observations of neonates with the clinical diagnosis of suspected sepsis hospitalized in the same NICU. We coded the thirteen attributes with the canonical basis vectors (the columns of a 13-dimensional identity matrix). For example, the presence of metabolic acidosis was represented by [0 0 1 0 0 0 0 0 0 0 0 0 0]^{T} and the absence of acidosis by [0 0 0 1 0 0 0 0 0 0 0 0 0]^{T}. For each patient of the test set we added the vectors corresponding to the confirmed presence or absence of each sign. These 15 vectors representing the clinical presentations of the neonates with the diagnosis of suspected sepsis are shown in Figure 5.
The classification of each patient was obtained as follows:

i) The vector with the clinical presentation is presented to the memory M. The output, [result_vector], is a linear combination of the vectors septic [1 0]^{T} and healthy [0 1]^{T}:
[result_vector] = M * ([indifferent_vector] ⊗ [clinical presentation])
The [indifferent_vector] is the sum of septic and healthy vectors: [1 1]^{T}.
ii) A diagnosis results from the evaluation of the coefficients of the twodimensional [result_vector]. If the first coefficient is greater than the second the case is classified as sepsis. If the second coefficient is the largest the patient is classified as nonseptic.
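Steps (i)–(ii) can be sketched as follows. The attribute vectors used here are hypothetical three-dimensional stand-ins for the trained thirteen-dimensional [attributes_septic] and [attributes_healthy]; everything else follows the construction in the text.

```python
import numpy as np

septic, healthy = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Hypothetical stand-ins for the trained attribute vectors (3 attributes
# instead of the real 13), used only to illustrate the mechanics.
attr_septic  = np.array([0.9, 0.1, 0.3])
attr_healthy = np.array([0.1, 0.8, 0.4])

# M = septic septic^T (x) attr_septic^T + healthy healthy^T (x) attr_healthy^T
M = (np.kron(np.outer(septic, septic), attr_septic.reshape(1, -1)) +
     np.kron(np.outer(healthy, healthy), attr_healthy.reshape(1, -1)))

def classify(presentation, M):
    indifferent = septic + healthy                   # [1, 1]
    result = M @ np.kron(indifferent, presentation)  # step (i)
    return "S" if result[0] > result[1] else "NS"    # step (ii)

print(classify(np.array([1.0, 0.0, 1.0]), M))  # prints S
```

Each coefficient of `result` is the scalar product between the patient's presentation and the learned attribute vector of one diagnosis, so the comparison in step (ii) picks the diagnosis whose stored presentation best matches the patient.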
Results
A context-dependent memory model acting as a minimal expert system
In this work we show a minimal context-dependent memory nucleus able to support diagnostic abilities. Our expert system consists of an autoassociative memory with overlapping contexts and a feedback loop that re-injects the output into the memory at the next time step (Figure 1).
A memory M acting as a basic expert system is a matrix (equation 3)
$\text{M}={\displaystyle \sum _{\text{i}=1}^{\text{k}}{\text{d}}_{\text{i}}{\text{d}}_{\text{i}}^{\text{T}}}\otimes {\displaystyle \sum _{\text{j}(\text{i})}{\text{s}}_{\text{j}}^{\text{T}}},$
where the d_{i} are column vectors mapping k different diseases (the set {d} is chosen to be orthonormal), the s_{j(i)} are column vectors mapping the signs and symptoms accompanying the i-th disease (also an orthonormal set), and ⊗ is the Kronecker product [25] – see Methods. Note that if the d are n-dimensional vectors (n ≥ k) and the s are m-dimensional, then the d_{i}d_{i} ^{T} are square symmetric matrices, and the memory M is a rectangular matrix of dimensions n × nm.
The instruction of the expert
The cognitive functioning shown by this kind of neural network model is based on the establishment of contextdependent associations. The instruction of the expert therefore consists in the instruction of the memory that stores these associations.
Each disease is instructed to the memory together with its characteristic signs and symptoms (these can include the results of laboratory exams, imaging studies, etc). For this to be done, the first step is to code each disease to be instructed with a different orthonormal vector. The same must be done with the set of signs, symptoms and paraclinical results that could accompany that set of diseases, also coding them with different column vectors of any orthonormal basis of adequate dimension.
Once the signs and symptoms corresponding to each disease have been identified and expressed as orthogonal vectors, the construction of the memory can commence. According to equation (1) this instruction consists in the superposition (the addition) of different rectangular matrices, each one corresponding to a different disease.
The instruction of the memory can be developed along two different paths.

a. Learning from the textbook. In this case, the expert is instructed according to the updated academic knowledge of each disease. A first disease is taken, coded by the column vector d_{i}, and the outer product of this vector with itself is formed (a square matrix containing this autoassociation). At the same time, all the signs and symptoms characteristic of this disease are identified and the vectors coding them are added up ($\sum_{j(i)}\mathbf{s}_{j}$). Finally, the Kronecker product between the square matrix and the transpose of the vector sum is performed. An analogous procedure is carried out for every pathology. Each new resulting rectangular matrix of dimension n × nm is added to the previous ones already stored in the memory M (a minimal numerical example is presented in section Methods – How to instruct the memory).

b. Learning by experience. This is a case-based way of instructing the memory. It allows the expert to progressively capture the prevalence of the different diseases in a community. Once the previous instruction is finalized, the memory is fed with the actual clinical findings of each particular patient assisted by the physician, attributing this particular constellation of signs and symptoms to the corresponding final diagnosis. The matrices resulting from new patients are progressively added to the memory. This type of representation implies two essential distinctions from the previous learning-from-the-textbook memory. First, pathologies are not equally weighted in the memory: their representations depend on the frequency of presentation of cases in the population. Second, for each disease the different symptoms are also not equally weighted: those corresponding to the more frequent clinical presentations will be strengthened.
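The learning-by-experience path can be sketched as a running sum over cases. The diseases, signs and case stream below are hypothetical, coded with canonical basis vectors for readability; the point is that frequencies accumulate in the weights.

```python
import numpy as np

# Hypothetical minimal setting: 2 diseases, 3 signs, canonical codes.
n_dis, n_sym = 2, 3
d = np.eye(n_dis)
s = np.eye(n_sym)

M = np.zeros((n_dis, n_dis * n_sym))
# Hypothetical case stream: (final diagnosis, signs observed in the case).
cases = [(0, [0, 1]), (0, [0, 2]), (0, [0, 1]), (1, [2])]
for dis, signs in cases:
    sigma = sum(s[:, j] for j in signs)           # this patient's presentation
    M += np.kron(np.outer(d[:, dis], d[:, dis]),  # equation (3), one case
                 sigma.reshape(1, -1))

# Disease 0 was seen three times, and sign 0 appeared in all of its cases,
# so both the disease and its most frequent sign carry larger weights:
print(M[0, :3])  # [3. 2. 1.]
```

This makes the two distinctions from textbook learning concrete: disease 0's block has grown with its case count, and within that block sign 0 (present in every case) outweighs signs 1 and 2.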
Medical queries
Once the training phase is finalized, the system is ready to be used. The presentation of a first sign or symptom initiates a medical query. The availability of a new clinical or laboratory finding causes the expert to advance one more step in its diagnostic decision. Even if many new signs and symptoms are available, they must be presented to the expert one at a time in order to obtain a progressive narrowing of the set of possible diagnoses. At each step, the new data are entered into the memory along with the set of diagnoses possible up to that moment. Finally, if the whole set of signs and symptoms available is sufficient, the system will arrive at a unique diagnosis.
Let us follow the operation of the system. The starting point is when the first clinical datum appears. The vector corresponding to this symptom is multiplied, by means of the Kronecker product, by the vector that represents the set of possible diagnoses (at the starting point, the indifferent vector). If the memory was instructed with equally weighted pathologies, the indifferent vector is the sum of all the disease vectors stored in the memory. If, on the contrary, the memory was instructed on the basis of individual cases, the indifferent vector will be a linear combination of the stored disease vectors in which the weight of each disease corresponds to its frequency of presentation. The resulting column vector is then multiplied by the memory matrix. The exit vector contains either a univocal diagnosis (if the clinical data are sufficient) or a certain linear combination of vectors corresponding to several diseases. If a unique diagnosis was not arrived at, when a new sign or symptom becomes available its corresponding vector enters the memory after taking its Kronecker product with the exit vector of the previous step. The process is repeated, and stops when a final diagnosis is reached or when no new clinical data are available (see the continuation of the numerical example in section Methods – How the system works).
Even if at a certain state a final diagnosis has not been reached, the outcome of the system nevertheless represents a probabilistic mapping of the possible diagnoses, each one with its respective probability in agreement with the data available so far. In order to obtain such a map in a direct way, it is convenient to choose as disease vectors the columns of an identity matrix of suitable dimension. In that case, the positions of the non-zero coefficients of each exit vector mark the different possible diagnoses. Normalizing this exit vector so that the sum of its components is one, the value of each non-zero coefficient represents the probability of the corresponding diagnosis. Otherwise, these probabilities can be obtained by multiplying the exit vector by the transpose of the orthonormal matrix that codifies the diseases.
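The last remark can be checked numerically. With diseases coded by the columns of any orthonormal matrix D (here a random one, purely hypothetical), the mixture coefficients of an exit vector are recovered by projecting it back onto D before normalizing.

```python
import numpy as np

# A hypothetical orthonormal disease coding: Q factor of a random matrix.
rng = np.random.default_rng(0)
D = np.linalg.qr(rng.normal(size=(3, 3)))[0]

# Exit vector: a mixture of diseases 1 and 2 with weights 2 and 1.
exit_vec = 2.0 * D[:, 0] + 1.0 * D[:, 1]

weights = D.T @ exit_vec        # orthonormality recovers the coefficients
prob = weights / weights.sum()  # normalize to a probabilistic map
print(np.round(prob, 3))        # approximately [2/3, 1/3, 0]
```

When D is the identity, this projection is the identity map, which is why identity-coded diseases let the probabilities be read directly off the exit vector.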
A reduced model for the diagnosis of lateonset neonatal sepsis
The system described in section Methods classified the patients of the test set (N = 15) as follows (S = sepsis; NS = non-septic):
$\begin{array}{ccccccccccccccc}1& 2& 3& 4& 5& 6& 7& 8& 9& 10& 11& 12& 13& 14& 15\\ S& NS& NS& S& S& S& S& S& NS& NS& S& S& S& S& S\end{array}$
Comparing this classification with the actual illness condition of the patients – shown in Figure 5 – only patient 14 was misdiagnosed. The 2 × 2 table shown in Figure 6 summarizes the behaviour of our diagnostic system. The sensitivity was 100% and the specificity 80%. The likelihood ratio (LR = true-positive rate/false-positive rate) was 5. Using this set of variables as input data, the performance of the system in the classification task can be evaluated as very good: it reached a high accuracy ((TP + TN)/N) of 93.3%, and a Cohen's kappa index of 0.84. (Kappa = (Accuracy − A_chance)/(1 − A_chance), where A_chance is the accuracy expected by chance: A_chance = (<TP> + <TN>)/N, with <TP> = (TP + FP)(TP + FN)/N and <TN> = (FN + TN)(FP + TN)/N.)
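The reported indices can be reproduced from the 2 × 2 table. With N = 15, sensitivity 100% and specificity 80%, the counts must be TP = 10, FP = 1, TN = 4, FN = 0, which is also consistent with the reported 91% positive and 100% negative predictive values.

```python
# Performance indices from the 2x2 table of the test set (Figure 6).
TP, FP, TN, FN = 10, 1, 4, 0
N = TP + FP + TN + FN                # 15 patients

sensitivity = TP / (TP + FN)         # 1.0
specificity = TN / (TN + FP)         # 0.8
accuracy = (TP + TN) / N             # 14/15 = 0.933...

# Cohen's kappa corrects the accuracy for agreement expected by chance.
exp_tp = (TP + FP) * (TP + FN) / N
exp_tn = (FN + TN) * (FP + TN) / N
a_chance = (exp_tp + exp_tn) / N
kappa = (accuracy - a_chance) / (1 - a_chance)
print(round(kappa, 2))               # 0.84
```

A kappa of 0.84 indicates that very little of the 93.3% accuracy is attributable to chance agreement, which is the reason the paper reports it alongside the raw accuracy.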
Discussion and conclusions
We have shown here that context-dependent associative memories can act as medical decision support systems. The system requires the previous coding of a set of diseases and their corresponding semiologic findings onto individual bases of orthogonal vectors. The model presented in this communication is only a minimal module able to evaluate the probabilities of different diagnoses when a set of signs and symptoms is presented to it.
This expert system based on an associative memory shares with artificial-intelligence programs a great capacity to quickly narrow the number of diagnostic possibilities [1]. It is also able to cope with variations in the way a disease can present itself.
A clear advantage of this system is that the probabilities assigned to the different diagnostic possibilities in any particular clinical situation do not have to be set arbitrarily by the specialist, but are automatically provided by the system, in agreement with the acquired experience. In this sense, this neural network model is akin to statistical pattern recognition [1]. However, neither programs based on simple matching strategies nor the most widely used neural network models are able to explain to the physician how they reached their conclusions. On the contrary, the operation of this system, which mirrors the underlying associative structure of human cognition, is transparent. Obviously, it must be understood that this is not the only mechanism involved in human decision making. The relevant properties of this associative memory model are summarized in Figure 7, in comparison with other neural network models and rule-based artificial intelligence systems.
Beginning with a textbook-instructed memory, the system evolves by accommodating (superimposing in the memory) new manifestations of disease gathered over time. This process of continued network education based on empirical evidence leads to databases representative of the different patient populations, each with its own geodemographic characteristics.
This model can easily be improved in various directions. The functioning of the system described up to now can be considered a passive phase, in the sense that it consists of an automatic evaluation of the available information. By adding another module to the system, consisting of a simple memory that associates diseases with their sets of findings, the expert can enhance its diagnostic performance. When two or three different diagnostic hypotheses remain after the passive phase of diagnosis refinement, this new module can be fed with the vectors mapping each of these diseases to elicit its associated set of clinical findings. The set of absent features supporting one or another disease determines what information must be sought next.
Another important expansion of the expert allows giving up the strong assumption that all the findings correspond to a unique disease. Our context-dependent memory stops and gives a null vector when contradictory data are provided. To prevent such behaviour, a module akin to a novelty filter could be interposed within the recursion, with the following property: if a vector with only zero coefficients arrives, this module associates the whole set of diseases, avoiding setting aside relevant diagnoses and concurrent pathologies. However, this theme needs further investigation: as for almost every expert system [26], the clustering of findings and their attribution either to a single disease or to several disorders is a major challenge.
The primary implementation of a reduced version of the model, with the aim of classifying septic and non-septic neonates, showed the highly satisfactory capacity of the model when applied to real data. We conclude that the context-sensitive associative memory model is a promising alternative for the development of accurate diagnostic tools. We expect that its easy implementation will stimulate medical informatics groups to develop this expert system at full scale.
References
1. Szolovits P, Patil RS, Schwartz WB: Artificial intelligence in medical diagnosis. Annals of Internal Medicine. 1988, 108: 80-87.
2. Schwartz WB, Patil RS, Szolovits P: Artificial intelligence in medicine: Where do we stand?. New England Journal of Medicine. 1987, 316: 685-688.
3. Arbib MA, Ed: The Handbook of Brain Theory and Neural Networks. 1995, Cambridge, MA: MIT Press.
4. Cross SS, Harrison RF, Lee Kennedy R: Introduction to neural networks. The Lancet. 1995, 346: 1075-1079. 10.1016/S0140-6736(95)91746-2.
5. Lisboa PJG: A review of evidence of health benefit from artificial neural networks in health intervention. Neural Networks. 2002, 15: 11-39. 10.1016/S0893-6080(01)00111-3.
6. Baxt WG: Application of artificial neural networks to clinical medicine. The Lancet. 1995, 346: 1135-1138. 10.1016/S0140-6736(95)91804-3.
7. Kohonen T: Associative Memory: A System-Theoretical Approach. 1977, New York: Springer-Verlag.
8. Friston KJ: Imaging neuroscience: Principles or maps?. Proc Natl Acad Sci USA. 1998, 95: 796-802. 10.1073/pnas.95.3.796.
9. McIntosh AR: Towards a network theory of cognition. Neural Networks. 2000, 13: 861-870. 10.1016/S0893-6080(00)00059-9.
10. Pomi A, Mizraji E: Semantic graphs and associative memories. Physical Review E. 2004, 70: 066136. 10.1103/PhysRevE.70.066136.
11. Mizraji E: Vector logics: the matrix-vector representation of logical calculus. Fuzzy Sets and Systems. 1992, 50: 179-185. 10.1016/0165-0114(92)90216-Q.
12. Anderson JA, Cooper L, Nass MM, Freiberger W, Grenander U: Some properties of a neural model for memory. AAAS Symposium on Theoretical Biology and Biomathematics. 1972, Milton, WA. Leon N Cooper Publications. [http://www.physics.brown.edu/physics/researchpages/Ibns/Cooper%20Pubs/040_SomePropertiesNeural_72.pdf]
13. Cooper LN: Memories and memory: a physicist's approach to the brain. International J Modern Physics A. 2000, 15: 4069-4082. [http://journals.wspc.com.sg/ijmpa/15/1526/S0217751X0000272X.html]
14. Cooper LN: A possible organization of animal memory and learning. Proceedings of the Nobel Symposium on Collective Properties of Physical Systems. Edited by: Lundquist B & S. 1973, New York: Academic Press.
15. Mizraji E: Context-dependent associations in linear distributed memories. Bulletin Math Biol. 1989, 51: 195-205.
16. Valle-Lisboa JC, Reali F, Anastasía H, Mizraji E: Elman topology with sigma-pi units: An application to the modelling of verbal hallucinations in schizophrenia. Neural Networks. 2005, 18: 863-877. 10.1016/j.neunet.2005.03.009.
17. Mizraji E, Pomi A, Alvarez F: Multiplicative contexts in associative memories. BioSystems. 1994, 32: 145-161. 10.1016/0303-2647(94)90038-8.
18. Pomi-Brea A, Mizraji E: Memories in context. BioSystems. 1999, 50: 173-188. 10.1016/S0303-2647(99)00005-2.
19. Mizraji E, Lin J: A dynamical approach to logical decisions. Complexity. 1997, 2: 56-63. 10.1002/(SICI)1099-0526(199701/02)2:3<56::AID-CPLX12>3.0.CO;2-S.
20. Mizraji E, Lin J: Fuzzy decisions in modular neural networks. Int J Bifurcation and Chaos. 2001, 11: 155-167. 10.1142/S0218127401002043.
21. Pomi A, Mizraji E: A cognitive architecture that solves a problem stated by Minsky. IEEE Transactions on Systems, Man and Cybernetics B (Cybernetics). 2001, 31: 729-734. 10.1109/3477.956034.
22. Stoll BJ, Hansen N, Fanaroff AA, Wright LL, Carlo WA, Ehrenkranz RA, Lemons JA, Donovan EF, Stark AR, Tyson JE, Oh W, Bauer CR, Korones SB, Shankaran S, Laptook AR, Stevenson DK, Papile LA, Poole WK: Late-onset sepsis in very low birth weight neonates: The experience of the NICHD Neonatal Research Network. Pediatrics. 2002, 110: 285-291. 10.1542/peds.110.2.285.
23. Rubin LG, Sánchez PJ, Siegel J, Levine G, Saiman L, Jarvis WR: Evaluation and treatment of neonates with suspected late-onset sepsis: A survey of neonatologists' practices. Pediatrics. 2002, 110 (4): e42. 10.1542/peds.110.4.e42.
24. Perotti E, Cazales C, Martell M: Estrategias para el diagnóstico de sepsis neonatal tardía [Strategies for the diagnosis of late-onset neonatal sepsis]. Rev Med Uruguay. 2005, 21: 314-320. [http://www.rmu.org.uy/revista/2005v4/art11.pdf]
25. Van Loan CF: The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics. 2000, 123: 85-100. 10.1016/S0377-0427(00)00393-9.
26. Szolovits P, Pauker SG: Categorical and probabilistic reasoning in medicine revisited. Artificial Intelligence. 1993, 59: 167-180. 10.1016/0004-3702(93)90183-C.
Prepublication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/6/39/prepub
Acknowledgements
We thank Dr. Eduardo Mizraji for useful comments and Dr. Julio A. Hernández for revision and improvement of the manuscript.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
AP conceived the application of the model to medical diagnosis, drafted the manuscript and carried out the implementation with real data of neonates with suspected lateonset sepsis. FO participated in the elaboration of the numerical examples, computational programs and the discussion of the model. Both authors read and approved the final manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Pomi, A., Olivera, F. Context-sensitive autoassociative memories as expert systems in medical diagnosis. BMC Med Inform Decis Mak 6, 39 (2006). https://doi.org/10.1186/1472-6947-6-39
Keywords
 Expert System
 Neural Network Model
 Neonatal Intensive Care Unit
 Associative Memory
 Kronecker Product