Brain modes: Dynamical models, brain-behaviour analysis and data fusion

A Two-Day Retreat, Nov 24-25, 2007


Berlin Neuroimaging Center
Charitéplatz 1, 10117 Berlin

Saturday & Sunday, November 24-25, 2007


"Brain modes" is a workshop aimed at promoting analysis, understanding and appraisal of a variety of means of reducing complex problems - through dimension reduction (or modal analysis) - to more simple ones. This approach has been used in a variety of applications - data analysis, data fusion, dynamical system modelling - although the core principles are typically identical. In addition to evaluating the use of these techniques for understanding brain function, we will specifically discuss the hypothesis that the brain itself follows such principles in order to solve complex problems and coordinate perception and behaviour - an approach pioneered by Hermann Haken in the 1970s.
We look forward to seeing you in Berlin!


Petra Ritter
Michael Breakspear
Klaas Enno Stephan

Retreat Organisers


Confirmed speakers:



Robert Becker 
Berlin Neuroimaging Center, Germany 


Tjeerd Boonstra
School of Psychiatry, UNSW and The Black Dog Institute, Australia 


Michael Breakspear 
School of Psychiatry, UNSW and The Black Dog Institute, Australia 


Andreas Daffertshofer 
Research Institute MOVE, VU University Amsterdam, The Netherlands


Jessica Damoiseaux 
VU University Medical Center, Amsterdam, The Netherlands


Jean Daunizeau 
Wellcome Department of Imaging Neuroscience, UCL, London, UK


Olivier David 
Inserm U836 Grenoble Institute of Neuroscience, Joseph Fourier University, Grenoble, France


Frank Freyer 
Berlin Neuroimaging Center, Germany


Lee Harrison 
Wellcome Department of Imaging Neuroscience, UCL, London, UK


Stuart Knock 
School of Psychiatry, UNSW and The Black Dog Institute, Australia 


Randy McIntosh 
Rotman Research Institute of Baycrest Centre, Toronto, Canada


Andreas Meyer-Lindenberg 
Central Institute of Mental Health Mannheim, Germany


Petra Ritter 
Berlin Neuroimaging Center, Germany


Serge Rombouts 
Leiden Institute for Brain and Cognition, The Netherlands


Cornelis J. Stam 
VU University Medical Center, Amsterdam, The Netherlands


Klaas Enno Stephan 
Wellcome Department of Imaging Neuroscience, UCL, London, UK


Pedro A. Valdes-Sosa
Cuban Neuroscience Center, Cuba




 

ABSTRACTS




Relation between ongoing rhythms and evoked activity

Robert Becker



The relationship between ongoing rhythms (e.g. occipital alpha, 8-12 Hz) and the generation of evoked potentials (e.g. VEPs) has been a matter of controversy. While the "evoked theory" assumes independence between VEP generation and the alpha rhythm, the "oscillatory theory" (also known as the "phase-reset theory") postulates that VEP generation depends critically on phase-resetting of the spontaneous rhythm. Previous experimental results are contradictory, rendering a straightforward interpretation difficult. Our approach was to determine theoretically the implications of the evoked and the oscillatory theory. The model based on the oscillatory theory predicts alpha-dependent VEP amplitudes but constant phase-locking. The model based on the evoked theory predicts unaffected VEP amplitudes but alpha-dependent phase-locking, which degrades with stronger rhythmic activity. In a next step, experimental data were examined in which VEPs were assessed in an "eyes open" and an "eyes closed" condition. For early components of the EP, the findings correspond well with the evoked theory, i.e. EP amplitudes remain unaffected while phase-locking decreases during periods of high alpha activity. Late VEP component amplitudes (>175 ms), however, were found to depend on pre-stimulus alpha amplitudes. Interestingly, this interaction was not reconcilable with the oscillatory theory, since the VEP amplitude difference was paralleled neither by a corresponding difference in alpha-band amplitude in the affected time window nor by an increased phase reset of the alpha rhythm. In summary, using a model-based approach we found early VEPs to be compatible with the evoked theory, while the results for late VEPs support a modulatory, but not the causative role implied by the oscillatory theory, of the spontaneous alpha rhythm with respect to EP generation.
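The two quantities this model comparison rests on can be made concrete. Below is a minimal sketch (Python/NumPy/SciPy assumed; the epoch layout, the 8-12 Hz band and the median split on pre-stimulus alpha power are illustrative choices, not the authors' exact pipeline) that computes the trial-averaged EP and the inter-trial phase-locking value separately for low- and high-alpha trials.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def evoked_and_phase_locking(epochs, fs, stim):
        # epochs: (n_trials, n_samples) single-channel EEG; fs in Hz; stimulus at sample `stim`
        b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
        analytic = hilbert(filtfilt(b, a, epochs, axis=1), axis=1)   # alpha-band analytic signal
        pre_power = np.abs(analytic[:, :stim]).mean(axis=1)          # pre-stimulus alpha amplitude
        low = pre_power < np.median(pre_power)
        out = {}
        for name, idx in [("low_alpha", low), ("high_alpha", ~low)]:
            ep = epochs[idx].mean(axis=0)                                      # evoked potential
            plv = np.abs(np.exp(1j * np.angle(analytic[idx])).mean(axis=0))    # phase-locking value
            out[name] = {"EP": ep, "PLV": plv}
        return out

Under the evoked account the EP traces of the two splits should be similar while the PLV drops for high-alpha trials; under the phase-reset account the opposite pattern is expected.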




 

Multivariate time–frequency analysis of brain activity during motor learning

Tjeerd Boonstra



Neuroimaging techniques such as EEG and MEG provide high-dimensional data sets of brain activity that are not directly testable with the statistics traditionally used in the cognitive sciences. Various operations, such as data selection and averaging, are used to reduce the dimensionality of the data, but they destroy structure present in high-dimensional data sets. Recently, a range of techniques has been introduced in neuroscience that exploit these interdependencies and allow for unbiased analysis with a high signal-to-noise ratio. In this talk, the application of a few of these techniques will be discussed in the context of a motor learning paradigm. In the study, participants learned to perform a complex bimanual motor task, i.e. a bimanual 3:5 polyrhythm, while MEG and EMG data were acquired. Sources of event-related MEG activity were determined by means of synthetic aperture magnetometry, which yielded locations and time courses of beta activity. The relationship between changes in performance and corresponding changes in event-related power was assessed using partial least squares. Behavioral data revealed that participants successfully learned to perform the 3:5 polyrhythm and that performance improvement was mainly achieved through the proper timing of the finger producing the slow rhythm. We found event-related modulation of beta power in the contralateral motor cortex that was inversely related to force output. The degree of beta modulation increased during the experiment - although the force level remained constant - and was positively correlated with motor performance, in particular for the motor cortex contralateral to the slow hand. These electrophysiological findings revealed that cortical motor activity switched more fluently between synchronized and desynchronized states over the course of learning, which might provide specific windows in time to communicate with other neural structures.
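As an illustration of the brain-behaviour step, a bare-bones behavioural partial least squares can be written as a singular value decomposition of the brain-behaviour cross-covariance (a sketch assuming NumPy and hypothetical variable names; the actual analysis additionally uses permutation tests and bootstrap resampling to assess significance and reliability):

    import numpy as np

    def behavioural_pls(X, y):
        # X: (n_observations, n_brain_features) event-related beta power per source/time bin
        # y: (n_observations,) behavioural performance score
        Xz = (X - X.mean(0)) / X.std(0)
        yz = (y - y.mean()) / y.std()
        cross = Xz.T @ yz[:, None]                    # brain-behaviour cross-covariance
        U, s, Vt = np.linalg.svd(cross, full_matrices=False)
        saliences = U[:, 0]                           # weights over sources/time bins
        scores = Xz @ saliences                       # one latent brain score per observation
        return saliences, scores, s[0]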




 

Bimodal and extremum statistics in temporal fluctuations of human EEG 

Michael Breakspear



Spectral analysis of the EEG has traditionally studied only the first moment, i.e. the mean energy density. The nature of fluctuations around the mean offers important potential insights into the nature of the underlying stochastic and dynamic processes. Wavelet decompositions of lengthy EEG recordings allow estimation of the probability distribution functions (PDFs) of these fluctuations. We find that at most temporal scales, such fluctuations can be reasonably well captured by a one-parameter family of PDFs, which show exponential asymptotic gradients and are consistent with an underlying Gaussian process. However, two interesting departures from these distributions are also observed. At many temporal scales, there exists a bias towards power-law scaling at the high-energy asymptote. Such large-amplitude - or extremum - fluctuations occur sporadically in the time domain in clusters of adjacent time scales. We also observe, predominantly in the alpha (~10 Hz) range, the existence of a distinct second mode, with very high characteristic energy, expressed in a discrete bursting fashion and consistent with intermittent destabilization of a low-energy ground state. Less distinct modes are also evident at other frequencies. We propose a mechanism for these phenomena, based on the nonlinear and multiscale character of neuronal architecture and physiology, and present evidence for these mechanisms in neural field models of corticothalamic activity.
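A minimal version of the fluctuation analysis can be sketched as follows (Python/NumPy assumed; the 7-cycle Morlet wavelet and log-spaced bins are illustrative choices): estimate complex wavelet coefficients at one frequency, then histogram their energies to inspect the tails and any second mode of the PDF.

    import numpy as np

    def wavelet_energy_pdf(x, fs, freq=10.0, n_cycles=7, n_bins=60):
        # x: 1-D EEG signal sampled at fs (Hz); freq: analysis frequency (e.g. 10 Hz, alpha)
        t = np.arange(-n_cycles / freq, n_cycles / freq, 1 / fs)
        sigma_t = n_cycles / (2 * np.pi * freq)
        morlet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
        energy = np.abs(np.convolve(x, morlet, mode="valid")) ** 2
        bins = np.logspace(np.log10(energy.min() + 1e-12), np.log10(energy.max()), n_bins)
        pdf, edges = np.histogram(energy, bins=bins, density=True)
        return edges[:-1], pdf

Plotted on logarithmic axes, an underlying Gaussian process gives approximately exponential high-energy tails; the power-law tails and the discrete high-energy mode near 10 Hz described above appear as departures from that shape.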




 

Destabilization of motor behavior – reducing complexity of its neural basis 

Andreas Daffertshofer



In this talk, recent progress in identifying fundamental mechanisms of large-scale neuronal activity will be summarized. Key ingredients are concepts of complex dynamics, in particular so-called phase transitions, which here manifest themselves as qualitative changes between distinct behavioral states. Such transitions are central because in their vicinity the dimension of the system is dramatically condensed. In consequence, multivariate approaches for data reduction like principal or independent component analysis, k-means clustering, etc., converge rapidly. Mathematically, these switches form bifurcations, whose overall structure can be used to pinpoint the underlying system's explicit dynamical form, its time scales, and so on. It is well established that changes between different movement patterns are accompanied by changes between large-scale patterns of brain activity when assessed by, e.g., EEG or MEG. Distinct contra- and ipsilateral cortical areas of activation are frequency- and phase-locked to cyclic movements, but their activation strengths change markedly in the vicinity of altered stability in motor performance. For instance, in bimanual performance the left-right phase relation in the iso-frequency case, or the left/right frequency ratio in the multi-frequency case, can be challenged both in the end-effectors (hands or fingers) and in the intra- and interhemispheric cortical interaction. If the performance of an end-effector becomes unstable, movement-related ipsilateral encephalographic activity is increased due to an excitatory interhemispheric cross-talk. That cross-talk is (partly) compensated by intrahemispheric inhibition. It will be shown that improperly timed inhibition may yield a phase transition and movement instabilities. Finally, a dynamical model implementing this account is briefly sketched; in addition to describing multi-frequency performance, it also explains the dominance of in- and anti-phase coordination during iso-frequency movements.
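The loss of stability of anti-phase coordination can be illustrated with the standard Haken-Kelso-Bunz relative-phase equation (a sketch only; the speaker's full model is more elaborate). Here phi is the relative phase between the hands and the ratio b/a falls as movement frequency increases; once b/a drops below 0.25 the anti-phase state (phi = pi) loses stability and the system switches to in-phase.

    import numpy as np

    def simulate_hkb(a=1.0, b=0.2, phi0=np.pi, noise=0.05, dt=0.01, T=60.0, seed=0):
        # dphi/dt = -a*sin(phi) - 2*b*sin(2*phi) + noise, i.e. gradient descent on
        # the HKB potential V(phi) = -a*cos(phi) - b*cos(2*phi) plus weak noise
        rng = np.random.default_rng(seed)
        phi = np.empty(int(T / dt))
        phi[0] = phi0
        for k in range(phi.size - 1):
            drift = -a * np.sin(phi[k]) - 2 * b * np.sin(2 * phi[k])
            phi[k + 1] = phi[k] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        return phi   # with b/a < 0.25 the trajectory escapes from pi and settles near 0 (in-phase)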




 

Resting State Connectivity: applications in aging and dementia.

Jessica S. Damoiseaux, Serge A.R.B. Rombouts 



Normal aging is related to cognitive decline even in the absence of disease. Several theories posit that cognitive deficits in normal aging arise from changes in functional or anatomical connectivity between brain regions, possibly due to white matter (WM) loss. Based on the observation of age-related WM degeneration, O'Sullivan and colleagues (2001) proposed the "disconnection" hypothesis that decline in normal aging emerges from changes in connections between brain areas, in addition to dysfunction of specific areas. The functional properties of brain networks and interactions among brain areas, however, have been difficult to measure. Functional correlation and clustering methods based on spontaneous fluctuations within brain systems using functional MRI (FMRI) provide a powerful approach for investigating network integrity. In a recent study we applied FMRI and analyzed the data with a clustering technique, revealing several active subsystems in the brain at rest. These observed resting-state networks (RSNs) are consistent across subjects and are hypothesized to reflect neural functions that serve to stabilize brain ensembles, consolidate the past, and prepare us for the future. The importance of studying RSNs in the context of clinical exploration has been illustrated by several studies, and changes in resting-state activity have been associated with aging and Alzheimer's disease (AD). Here we will present our research into the effects of normal aging and AD on the functional connectivity of intrinsic brain activity (i.e. RSNs) and whether this functional connectivity is related to cognitive function.


A mesostate-space model for EEG and MEG
J. Daunizeau & K. Friston

We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources; (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates; (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other); and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamic causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques using synthetic and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.
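The model-comparison logic - choose the number of meso-sources by model evidence - can be illustrated by analogy (this is not the authors' inversion scheme): fit mixture models of source amplitudes with increasing numbers of states and score them with BIC, a crude approximation to the log evidence. scikit-learn and the variable names below are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def select_n_states(source_amplitudes, max_states=6, seed=0):
        # source_amplitudes: (n_samples, n_dipoles) reconstructed source activity (assumed given)
        scores = {}
        for k in range(1, max_states + 1):
            gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=seed)
            gmm.fit(source_amplitudes)
            scores[k] = -gmm.bic(source_amplitudes)   # higher is better; -BIC/2 roughly tracks log evidence
        best = max(scores, key=scores.get)
        return best, scores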




 

Dynamic causal models and autopoietic systems

Olivier David



Dynamic Causal Modelling (DCM) and the theory of autopoietic systems are two important conceptual frameworks. In this talk, I'll suggest combining them to study self-organising systems like the brain. DCM has been developed recently by the neuroimaging community to explain, using biophysical models, how non-invasive brain imaging data are caused by neural processes. It allows one to ask mechanistic questions about the implementation of cerebral processes. In DCM, the parameters of biophysical models are estimated from measured data and the evidence for each model is evaluated. This enables one to test different functional hypotheses (i.e., models) for a given data set. Autopoiesis and related formal theories of biological systems as autonomous machines represent a body of concepts with many successful applications. However, autopoiesis has remained largely theoretical and has not penetrated the empiricism of cognitive neuroscience. In this presentation, I'll try to show the connections that exist between DCM and autopoiesis. In particular, I'll propose a simple modification to standard formulations of DCM that includes autonomous processes. The idea is to exploit the system-identification machinery of DCM in neuroimaging to test the face validity of autopoietic theory applied to neural subsystems. I'll illustrate the theoretical concepts and their implications for interpreting electroencephalographic signals acquired during amygdala stimulation in an epileptic patient.




 

Noninvasive measures of population spikes in humans

Frank Freyer



Functional magnetic resonance imaging (fMRI) delineates human brain activity noninvasively at high spatial resolution and has become a cornerstone of cognitive neuroscience. A current limitation of fMRI is its rooting in activity-coupled changes in blood flow or hemoglobin oxygenation (blood oxygenation level dependent, BOLD, contrast) occurring at a time scale of seconds. Consequently, inferences on underlying neurophysiological events - such as the timing of action potentials (APs) at a millisecond time scale - have remained largely elusive. This "inverse problem" has been addressed in animal studies by combining vascular/metabolic measures with invasive recordings of neuronal low-frequency local field potentials and high-frequency APs. Here, we present a noninvasive approach to combine fMRI with a population measure of APs in human subjects, which is based on somatosensory evoked high-frequency (600 Hz) bursts (HFBs) recorded with surface electroencephalography (EEG) during fMRI data acquisition. The approach proved sensitive for the detection of induced and spontaneous amplitude modulations of two HFB components in the nanovolt range which have recently been shown to reflect thalamic and cortical population spikes. Consistent with these generator sites, spontaneous fluctuations of the early and late HFB components yielded a significant covariation with the fMRI signal in the thalamus and primary somatosensory cortex, respectively. These results indicate that monitoring human population spikes by means of 600 Hz surface EEG signals during fMRI is feasible, allowing non-invasive identification of rapidly succeeding neuronal processes when fMRI is combined with such a spike measure.
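A sketch of the single-trial HFB amplitude extraction implied above (Python/SciPy assumed; the 450-750 Hz pass-band, epoch layout and variable names are illustrative choices):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def hfb_amplitude(epochs, fs, band=(450.0, 750.0)):
        # epochs: (n_trials, n_samples) somatosensory-evoked EEG sampled at fs (well above 1.5 kHz)
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        envelope = np.abs(hilbert(filtfilt(b, a, epochs, axis=1), axis=1))  # instantaneous burst amplitude
        return envelope.max(axis=1)   # one HFB amplitude per trial

Trial-by-trial amplitudes of the early and late HFB components can then be convolved with a haemodynamic response function and entered as regressors against the simultaneously acquired fMRI time series.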




 

Diffusion-based spatial priors for imaging

Lee Harrison



Imaging neuroscience now pervades nearly every aspect of neurobiology, from cognitive psychology to neurogenetics. Its principal strength is the ability to make inferences about structure-function relationships in the brain. However, the prevailing analyses of brain imaging data (statistical parametric mapping) do not support inferences about the spatial aspects of structure or function. This is because they use a mass-univariate approach, which considers each voxel (i.e. point in the brain) separately. An important example here is the organisation of retinotopically mapped responses in visual cortex: all the available evidence suggests that these responses are segregated into distinct cytoarchitectonic areas, with defined boundaries. However, it is currently not possible to infer whether a model with non-stationary smoothness (i.e., with boundaries) of functionally selective responses is better than a model with stationary smoothness (i.e., without boundaries).
The purpose of this work is to provide this evidence using random field theory (RFT) (1) in a probabilistic model to enable hypothesis-driven enquiry into the spatiotemporal causes of measured brain recordings. We propose to do this by integrating graph-based image analysis (2) and Gaussian process priors (GPP) (3) within a hierarchical Gaussian process model (GPM) to provide a principled and powerful framework for the analysis of neuroimages. A proof of concept applying our approach to a random-effects (between-subject) analysis using standard resolution fMRI is reported in (4). These models are, in general, non-Gaussian, as the GPP is embedded on a surface, e.g. cortical mesh, which encodes local geometry. A Bayesian framework allows one to compare models, or hypotheses, framed in terms of effects, e.g. anatomy or activity, that are distributed over voxels.
This work is potentially important because it takes RFT from its conventional application (providing post hoc adjustments to classical p-values) and integrates it into a full probabilistic spatiotemporal model of neuroimaging data, enabling a statistical model to answer questions, with a measured degree of certainty, about the 'texture' and 'shape' of functional responses. These questions have become increasingly important in imaging neuroscience, e.g. retinotopic mapping and high-resolution functional magnetic resonance imaging (hr-fMRI) of the superior colliculus (SC) (5), lateral geniculate nucleus (LGN) (6) and fusiform face area (FFA) (7). This last example is important because the correspondence that followed this paper indicated that the simple rules used to evaluate the 'texture' of the response were not correctly formulated, leading to serious criticism of some of the results (8, 9). A more suitable analysis would be one that explicitly considers the spatial features, or geometries, of neuronal responses.
We will discuss the motivation and theoretical foundations of this approach using synthetic data and an application to high-resolution (1 mm isotropic) retinotopic fMRI data.

References
1. Adler, R.J., The Geometry of Random Fields. 1981, London: Wiley.
2. Zhang, F. and E.R. Hancock, Image scale-space from the heat kernel. Progress in Pattern Recognition, Image Analysis and Applications, Proceedings, 2005. 3773: p. 181-192.
3. MacKay, D.J.C., Introduction to Gaussian Processes. In: C. Bishop (ed.), Neural Networks and Machine Learning, NATO ASI Series, Vol. 168. 1998, Springer: Berlin. p. 133-165.
4. Harrison, L., et al., Diffusion-based spatial priors for imaging. Neuroimage, 2007. accepted.
5. Sylvester, R., et al., Visual FMRI responses in human superior colliculus show a temporal-nasal asymmetry that is absent in lateral geniculate and visual cortex. J Neurophysiol, 2007. 97(2): p. 1495-502.
6. Haynes, J.D., R. Deichmann, and G. Rees, Eye-specific effects of binocular rivalry in the human lateral geniculate nucleus. Nature, 2005. 438(7067): p. 496-9.
7. Grill-Spector, K., R. Sayres, and D. Ress, High-resolution imaging reveals highly selective nonface clusters in the fusiform face area. Nat Neurosci, 2006. 9(9): p. 1177-85.
8. Baker, C.I., T.L. Hutchison, and N. Kanwisher, Does the fusiform face area contain subregions highly selective for nonfaces? Nat Neurosci, 2007. 10(1): p. 3-4.
9. Simmons, W.K., P.S. Bellgowan, and A. Martin, Measuring selectivity in fMRI data. Nat Neurosci, 2007. 10(1): p. 4-5.
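To make the notion of a diffusion-based spatial prior concrete, the sketch below (Python/NumPy/SciPy assumed; the toy chain graph and diffusion time are illustrative) smooths effect estimates with the heat kernel exp(-tL) of a graph Laplacian built from voxel or vertex adjacency, the basic ingredient of the priors described in references 2 and 4.

    import numpy as np
    from scipy.linalg import expm

    def heat_kernel_smooth(values, adjacency, t=1.0):
        # values: (n_nodes,) effect estimates; adjacency: (n_nodes, n_nodes) symmetric 0/1 matrix
        laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
        kernel = expm(-t * laplacian)          # heat kernel; each row gives local smoothing weights
        return kernel @ values

    # Toy example: a 10-node chain; a cortical mesh with region boundaries would use a different adjacency
    A = np.zeros((10, 10))
    idx = np.arange(9)
    A[idx, idx + 1] = 1
    A[idx + 1, idx] = 1
    smoothed = heat_kernel_smooth(np.random.randn(10), A, t=2.0)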




 

Network analysis of dynamical interdependence in EEG data

Stuart Knock



EEG and MEG data offer the potential to capture the moment-to-moment expression of distributed brain network activity. However, the empirical estimation of such network properties requires multiple analysis steps, each of which can introduce systematic biases or influence overall sensitivity. Thorough knowledge of such factors provides insight into the exact nature and possible causes of non-random brain network activity. With this in mind, we have been undertaking a systematic validation study combining dynamical interdependence and graph theoretical methods to analyse scalp EEG data acquired during a variety of cognitive tasks. The aim of this talk is to stimulate discussion on the appropriateness and effect of several key analysis decisions. Initial decisions include whether to work directly with EEG channels and use a surrogate data step to control for volume conduction, or to use a method such as infomax ICA to separate the data into source "brain modes". Further, should the complete waveform be used, or can the analysis of band-passed data provide more reliable information about brain activity? The calculation of dynamical interdependence involves a number of decisions, such as the length of the future "prediction horizon". Having obtained connection matrices based on dynamical interdependence, we then encounter a host of possibilities regarding definitions of metrics for weighted directed graphs and possible normalisations. For the latter step there exists a hierarchy of increasingly random graphs, allowing one to infer whether apparent non-random structure is due to low-order (e.g. edge degree) or high-order (e.g. motif expression) network properties. We will present the progress of our current work to address these questions.
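One standard option for the surrogate-data step mentioned above is phase randomisation (a sketch assuming NumPy; whether phases are randomised independently per channel or with a common shift depends on the null hypothesis one wants to test):

    import numpy as np

    def phase_randomised_surrogate(data, seed=None):
        # data: (n_channels, n_samples) real-valued EEG; each channel keeps its power spectrum
        # but receives random Fourier phases, destroying genuine dynamical interdependence.
        rng = np.random.default_rng(seed)
        spectra = np.fft.rfft(data, axis=1)
        phases = rng.uniform(0, 2 * np.pi, size=spectra.shape)
        phases[:, 0] = 0.0          # keep the DC component real
        phases[:, -1] = 0.0         # keep the Nyquist component real (even-length signals)
        surrogate = np.abs(spectra) * np.exp(1j * phases)
        return np.fft.irfft(surrogate, n=data.shape[1], axis=1)

Interdependence measures computed on the original data can then be compared against the distribution obtained from many such surrogates.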




 

Group-wise independent components analysis and its use for evaluating functional connectivity

Randy McIntosh



The high temporal resolution of electroencephalography (EEG) affords us the opportunity to investigate the dynamics of network interactions by examining task-dependent functional connectivity. A disadvantage of EEG is the spatial smearing of cortical sources recorded at the scalp, which introduces a strong spatial autocorrelation confound. An alternative procedure is to use group-wise independent components analysis (group-ICA, Kovacevic & McIntosh, 2007) as a spatiotemporal filter to minimize this confound. The modes identified with group-ICA are temporally independent across the entire time series, but may still show local dependencies. Moreover, as they are extracted at the group level, the modes represent the most robust patterns that exist in the sample. The examination of task-dependent changes in functional connectivity of the modes indicates enhanced sensitivity to detect these changes, which can also be related to variations in behavior. The linking of functional connections with brain-behavior correlations provides a more complete appreciation of the dynamics that underlie cognitive operations.
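The idea of extracting modes at the group level can be sketched with a temporal-concatenation ICA (scikit-learn's FastICA assumed; the actual group-ICA of Kovacevic & McIntosh differs in its preprocessing and robustness steps, so this is an illustration of the principle only):

    import numpy as np
    from sklearn.decomposition import FastICA

    def group_ica(subject_data, n_modes=10, seed=0):
        # subject_data: list of (n_samples, n_channels) arrays, one per subject
        concatenated = np.vstack(subject_data)              # stack time points across subjects
        ica = FastICA(n_components=n_modes, random_state=seed)
        timecourses = ica.fit_transform(concatenated)       # temporally independent group modes
        topographies = ica.mixing_                          # channel pattern of each mode
        return timecourses, topographies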

 

Nonlinear dynamics and the topology of global connectivity in the human brain

Andreas Meyer-Lindenberg and Danielle S. Perry



Even a cursory review makes it clear that the operation of the brain critically depends on a complex interaction of spatially segregated neural systems. An adequate description of these interactions and an understanding of their nature are therefore an important challenge for neuroscience. While this applies to normal as well as abnormal brain function, a study of the nature of corticocortical interactions will be needed most of all in the study of diseases and conditions in which an alteration of connectivity is assumed to play a prominent role. A case in point is schizophrenia, in which convergent evidence from neuroanatomical, neurophysiological, pharmacological and theoretical studies suggests that a disturbance of cortical connections may play an important role in producing a functionally devastating and characteristic syndrome based on a pathology that is (comparatively) subtle and possibly diffuse (Fletcher, 1998; Friston, 1998; Meyer-Lindenberg et al., 2001; Weinberger et al., 1992). If this is accepted, then a fundamental challenge to any theory of brain function will be a description of the general nature of these cortical interactions. Controversy exists (Wright and Liley 1996, and discussion) as to whether these phenomena may be adequately described as linear or whether they have an important nonlinear, or even chaotic, component. The importance of this question arises from the fact that nonlinear/chaotic systems may exhibit a variety of phenomena such as unpredictability, adaptability, critical phase transitions etc. that make intuitive sense with regard to mental functioning, but are essentially absent in linear systems (Haken 1996). Again, this is especially obvious in schizophrenia, a disease in which sudden and unpredictable transitions in psychopathology are common. Consequently, alterations of nonlinear dynamics in schizophrenia have been described at the level of psychopathology (Dunki and Ambuhl 1996), overt behavior (Paulus et al. 1996) and electrophysiology (Koukkou et al. 1995, Meyer-Lindenberg et al. 1998), and are regarded by several authors as a critical feature of the alteration of connectivity in this disorder (Friston 1996, 1997, Edelman 1994). Studies applying measures from nonlinear dynamics to neural activity in brain development, in healthy humans and in subjects with schizophrenia, both untreated and treated with neuroleptic drugs (Meyer-Lindenberg, 1996; Meyer-Lindenberg, 1999; Meyer-Lindenberg, 2003; Meyer-Lindenberg et al., 1998), uncovered evidence for the utility of quantitative descriptors of nonlinear dynamics (such as Lyapunov exponents or the correlation dimension) for classifying neural activity. Since these parameters presuppose, but cannot prove, the presence of nonlinear dynamics in the human brain, these studies prompted an experiment designed to directly demonstrate a nonlinear phase transition in the human brain (Meyer-Lindenberg et al., 2002). For this, we employed the bimanual motor coordination paradigm studied by Kelso (Kelso, 1984), used PET to image brain regions differentially affected by movement instability, and then directly perturbed the brain at these sites using transcranial magnetic stimulation, producing a behavioral switch from unstable to stable movement patterns. We have recently extended the characterization of neural dynamics in the human brain using graph theory (Bassett et al., 2007).
In this work, the global pattern of connectivity is measured at successive temporal scales using a wavelet decomposition approach and then characterized by methods from graph theory. This work reveals self-similar and small-world properties of brain connectivity that link into nonlinear dynamics, and shows that the dynamical consequences of these connectivity patterns place human brain function on the "edge of criticality", linking the topology of connectivity across temporal and spatial scales with mechanisms for emergent and adaptive behavior.
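A rough stand-in for the scale-wise connectivity construction (Python/SciPy assumed; octave-wide Butterworth bands replace the wavelet decomposition of Bassett et al., and the band edges are illustrative):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def scalewise_correlation(data, fs, bands=((1, 2), (2, 4), (4, 8), (8, 16), (16, 32))):
        # data: (n_regions, n_samples) regional time series sampled at fs (Hz, > 64 for these bands)
        matrices = {}
        for lo, hi in bands:
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, data, axis=1)
            matrices[(lo, hi)] = np.corrcoef(filtered)   # region-by-region correlation per scale
        return matrices   # each matrix can then be thresholded and analysed as a graph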




 

The resting state: Relations between EEG rhythms and fMRI networks

Petra Ritter



Even in the absence of explicit external or internal stimuli, the brain sustains its activity in a characteristic way. When the temporal fluctuations of cerebral activity in this apparent "resting state" are measured by neuroimaging techniques such as functional magnetic resonance imaging (fMRI), a distinct resting state network (RSN) of confined anatomical structures is revealed. A prominent feature of the resting state that can be assessed with the electroencephalogram (EEG) is rhythmic activity in the range of 1 to 100 Hz. Simultaneous recording of EEG and fMRI allows one to assess fMRI correlates of these background rhythms. In particular, the alpha (8-12 Hz) and beta (13-29 Hz) rhythms have been extensively evaluated by simultaneous EEG-fMRI. FMRI correlates of both rhythms have been found in anatomical structures that correspond to those of the RSNs. It has further been shown that certain spectral bands, such as alpha, contain multiple functionally distinct EEG rhythms with specific scalp distributions. Based on this knowledge, we hypothesized that RSN temporal properties are related to those of the distinct background rhythms. We test this hypothesis on resting state data acquired under eyes-closed and eyes-open conditions with simultaneous EEG-fMRI. Beyond presenting interim results, we aim to discuss different approaches to integrating data from both modalities. We will also address the methodological challenges of simultaneous EEG-fMRI acquisition, in particular how residual gradient and ballistocardiogram artifacts can affect results and easily lead to illegitimate inference.
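One common way to relate a background rhythm to the fMRI signal, sketched below (Python/SciPy assumed; the simplified double-gamma HRF, the band limits and the assumption that EEG and fMRI cover the same interval are illustrative), is to convolve band-limited EEG power with a haemodynamic response function and sample it at the fMRI volume times:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_bold_regressor(eeg, fs, tr, n_volumes):
        # eeg: 1-D artifact-corrected EEG at fs (Hz); tr: repetition time (s); returns one value per volume
        b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
        power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2          # instantaneous alpha power
        t = np.arange(0, 30, 1 / fs)                               # simplified double-gamma HRF (seconds)
        hrf = t**5 * np.exp(-t) / 120.0 - t**15 * np.exp(-t) / (6 * 1.3077e12)
        convolved = np.convolve(power, hrf)[: power.size]
        samples = (np.arange(n_volumes) * tr * fs).astype(int)
        return convolved[samples]   # regressor to correlate with voxel-wise BOLD time series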




 

Graph theoretical analysis of functional connectivity patterns in the brain

Cornelis J. Stam



Higher brain functions require functional interactions between widely distributed specialized brain areas. Such interactions can be studied by considering the statistical interdependencies between time series of brain metabolism or neural activity recorded from different brain areas, a concept known as 'functional connectivity'. Recently, many sophisticated analytical tools have been developed to study functional connectivity of BOLD, EEG and MEG recordings. However, a proper interpretation of the resulting huge datasets, often consisting of large matrices of pairwise estimates of connectivity, presents significant challenges. With the advent of modern network theory, and in particular the discovery of small-world and scale-free networks, a new paradigm has become available to study complex functional networks in the brain (Stam and Reijneveld, 2007). Small-world and scale-free networks are realistic models of many real-world networks, and are characterized by a high degree of clustering as well as by strong overall integration, signified by small path lengths. Moreover, these network types can emerge through realistic growth scenarios, are remarkably resistant to various types of error, and present an optimal balance between 'wiring cost' and information processing efficiency. We will show how typical functional connectivity data derived from EEG and MEG can be converted to unweighted or weighted graphs, and characterized in terms of graph theory. There is increasing evidence that healthy brain networks are small-world networks. However, the network patterns change during cognitive tasks and different behavioural states. Remarkably, graph theoretical measures of resting state networks show a strong heritability. Finally, studies in Alzheimer's disease, schizophrenia and epilepsy suggest that neuropsychiatric disease may be associated with a disturbed network topology more closely resembling random networks.

References: Stam CJ, Reijneveld JC. Graph theoretical analysis of complex networks in the brain. Nonlinear Biomedical Physics 2007; 1: 3 doi: 10.1186/1753-4631-1-3
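The conversion from a connectivity matrix to graph-theoretic quantities can be sketched as follows (Python with NumPy and networkx assumed; the fixed threshold is an illustrative choice, and in practice the threshold or edge density is varied and the resulting values are referenced to matched random graphs):

    import numpy as np
    import networkx as nx

    def small_world_summary(connectivity, threshold=0.5):
        # connectivity: (n_channels, n_channels) matrix of pairwise coupling estimates
        adjacency = (np.abs(connectivity) > threshold).astype(int)
        np.fill_diagonal(adjacency, 0)
        graph = nx.from_numpy_array(adjacency)
        clustering = nx.average_clustering(graph)                 # local segregation
        path_length = nx.average_shortest_path_length(graph)      # global integration (graph must be connected)
        return clustering, path_length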




 

Non-linear Dynamic Causal Models

Klaas Enno Stephan



Models of effective connectivity characterize the influence that neuronal populations exert over one another. Additionally, some approaches, for example Dynamic Causal Modeling (DCM) for fMRI or variants of Structural Equation Modeling, model how effective connectivity is modulated by experimental manipulations. Mathematically, both are based on bilinear equations. The bilinear framework, however, precludes modelling an important aspect of neuronal interactions that is known from invasive electrophysiological recording studies, namely how the connection between two neural units is enabled or gated by activity in other neural units. These gating processes are critical for controlling the gain of neuronal populations and are mediated through interactions between synaptic inputs to the same dendritic compartment. They represent a critical mechanism for various neurobiological processes, including top-down (attentional) modulation, learning, and the actions of modulatory transmitters. In this talk, a non-linear extension of DCM is presented that models such processes (to second order) at the neural population level. Both simulation and empirical results will be presented that demonstrate the validity and usefulness of this model.
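For orientation, the bilinear state equation of DCM and its second-order (non-linear) extension can be written as follows, in the standard DCM notation (the formulation presented in the talk may differ in detail): A encodes fixed connectivity, B^(j) the modulation of connections by experimental input u_j, C the driving inputs, and D^(i) how activity in region i gates connections among other regions.

    \dot{x} = \Big(A + \sum_{j} u_j B^{(j)}\Big)\, x + Cu
    \qquad\longrightarrow\qquad
    \dot{x} = \Big(A + \sum_{j} u_j B^{(j)} + \sum_{i} x_i D^{(i)}\Big)\, x + Cu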




 

Neural Mass Model based EEG-fMRI fusion

Pedro A. Valdes-Sosa



Neural mass models expressed as random dynamical equations may be fitted to EEG and/or fMRI data by:
a) Discretising model dynamics by means of the local linearization approximation (Exponential Euler)
b) Kalman Filtering of EEG and fMRI state and observation models to obtain innovations.
c) Estimation of model parameters by maximizing the likelihood obtained from the innovations.

This approach (pioneered by T. Ozaki) has already been applied to neural mass modeling of the EEG (Valdes et al., 1997; Sotero et al., 2006a), of the fMRI (Sotero et al., 2006b), as well as of combined EEG-fMRI (Riera et al., 2005a,b; Sotero et al., 2007). While this approach bases inference on all the dynamical invariants of the original continuous model, the scenarios considered to date involve only a single or a very small number of active regions. This limitation is as much due to the paucity of data (relative to the number of parameters to estimate) as to computational limitations, since the exponential Euler technique requires the calculation of the matrix exponential of the Jacobian of the random dynamical system over each time step - a daunting task even using efficient Krylov subspace methods. Additionally, traditional models do not address the issue of synchronization within the neural masses, imposing simplistic relations between EEG and fMRI.
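The local linearization (exponential Euler) step in (a) can be sketched in a few lines (Python/SciPy assumed; f and jacobian are generic placeholders for the drift and Jacobian of a particular neural mass model, and the Jacobian is assumed invertible):

    import numpy as np
    from scipy.linalg import expm

    def local_linearization_step(x, f, jacobian, dt):
        # One step of x(t+dt) = x(t) + J^{-1} (expm(J*dt) - I) f(x(t)), with J = jacobian(x(t))
        J = jacobian(x)
        phi = np.linalg.solve(J, expm(J * dt) - np.eye(len(x)))
        return x + phi @ f(x)

The Kalman-filter innovations in (b) are then the differences between the observed EEG/fMRI samples and the one-step predictions generated by such steps; evaluating the matrix exponential of the Jacobian at every step is the computational bottleneck referred to above.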

In this presentation we review some recent methodological advances that overcome these shortcomings. It is shown that the proper framework for this type of modeling is that of random differential-algebraic equations expressed in a canonical form that allows separate integration of each neural mass. Extremely efficient computations are possible with the explicit formulation of a discrete-time neural mass model coupled with Kalman filtering techniques developed for massive data assimilation projects. Finally, the discrete-time neural mass equations are augmented to model neural synchrony. This framework is applied to analyze real and simulated relations between EEG oscillations and BOLD measurements. Stumbling blocks to applying this type of modeling to spatially continuous neural models will be discussed.