Objective. The Wilson-Cowan model depicts a neural population composed of an excitatory and an inhibitory subpopulation. Each of these subgroups represents a large-scale population of neurons that exhibits various oscillation patterns when firing.
All cells have a charge difference across their membrane, and this difference generates a membrane potential \(V\). In most cell types, the resting potential is resistant to change. However, in some cell types (like neurons), variations in the cell’s charge distribution can lead to large membrane potential changes. Such cells are called excitable.
Neurons use their excitable cell membranes to convey information. Each neuron contains a cell body called the soma, a branching tree of dendrites, and a single extended arm called an axon (Figure 1). Dendrites receive signals and integrate them at the cell body. When the incoming signals permit a response, a pulse is generated in the soma’s transmembrane voltage. This pulse is called an action potential; it propagates down the cell’s axon and relays a signal to other neurons in the network.
The dendrites are an essential feature in coupling synaptic inputs and managing action potentials. The branch density and patterns of the dendrites (Figure 2) correlate with the neuron’s function. In addition to a dendrite’s unique morphology, dendritic branching is a recurring process in which neurons form new dendritic trees and branches, allowing them to build new synapses (Figure 3).
Postsynaptic potentials, also called graded potentials, are changes in the membrane potential at a chemical synapse. These potentials trigger or inhibit action potentials. Chemically gated ion channels open on postsynaptic membranes, where sodium and potassium ions diffuse in opposite directions. Postsynaptic receptors, located at chemical synapses, open ion channels and convert chemical signals into electrical signals. The following model describes the dynamics of the potential \(V(t)\) between the inside and outside of the cell for the membrane current \(I(t)\).
\[V(t) = V_\text{inside}(t) - V_\text{outside}(t)\]
\[ \underbrace{\mathbf{I}(t)}_{\substack{\text{total membrane} \\ \text{current}}} = \underbrace{C\frac{d}{dt}\mathrm{V}(t)}_{\substack{\text{capacitive} \\ \text{current}}} + \underbrace{\mathrm{I}_i(t)}_{\substack{\text{ionic} \\ \text{current}}} \]
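This current balance can be read as a leaky-integrator circuit: the applied current splits into a capacitive part and an ionic part. A minimal numerical sketch, assuming an ohmic ionic current \(I_i = g(V - V_\text{rest})\) and purely illustrative values for \(C\), \(g\), and the applied current (none of these are fixed by the text):

```python
import math

def simulate_membrane(I_ext=1.0, C=1.0, g=0.1, V_rest=-65.0, dt=0.1, T=200.0):
    """Euler-integrate C dV/dt = I_ext - g*(V - V_rest), i.e. the current
    balance I(t) = C dV/dt + I_i(t) with an assumed ohmic I_i."""
    V = V_rest
    for _ in range(int(T / dt)):
        dVdt = (I_ext - g * (V - V_rest)) / C
        V += dt * dVdt
    return V

# The membrane settles at V_rest + I_ext/g (here -65 + 10 = -55).
print(simulate_membrane())
```

With a constant applied current, the voltage relaxes exponentially (time constant \(C/g\)) to a new steady level, which is the behavior the equation above encodes.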
When an action potential reaches the axon terminal, it activates calcium channels, inducing an inrush of \(\mathrm{Ca}^{2+}\) across the neuron’s membrane (Figure 4). These calcium ions cause the release of signaling molecules called neurotransmitters, which disperse through the synaptic cleft and attach to receptors on the next cell’s dendrites. The gap between the signaling cell’s axon and the postsynaptic dendrite is the synaptic cleft. Neurotransmitter receptors on postsynaptic cells are coupled to ion channels. When neurotransmitters bind these receptors, the receptor-channel complexes excite the target cell’s membrane. If a sufficient number of these excitatory receptors are activated, an action potential is triggered in the cell and transmitted down its axon to the next cell.
In contrast, there are receptor-channel complexes with inhibitory effects, in which ions flowing through the channel help restore the system to its resting potential. The combination of these excitatory and inhibitory signals in the cell body allows various signaling behaviors. Additional diversity comes from the receptor channels’ dynamic actions, which can respond to signals at different times and for different durations.
Positive ion current directed from inside to outside defines a neuron’s membrane current. Depending on how they affect the membrane potential, chemical synapses are either excitatory or inhibitory. The types and functions of these excitatory and inhibitory cells are the main topics we cover in this study. In the following, we further study these excitatory and inhibitory effects by analyzing Wilson and Cowan’s dynamic two-equation system modeling the two neuronal populations.
The Wilson-Cowan model represents the dynamics between spatially confined excitatory and inhibitory subpopulations. Here, \(E(t)\) and \(I(t)\) denote the excitatory and inhibitory populations’ instantaneous discharge rates at time \(t\), respectively.
a. Point 1. Logistic Population Dynamics
The logistic growth model is commonly used in biology to model a population’s growth under density dependence. The study assumes that the resting threshold \(\theta\) is the same for all cells in the population. Hence, the sigmoid (logistic) response function \(\mathcal{S}(x)\) is related to the distribution of synapses. Here, \(a\) and \(\theta\) are parameters of the sigmoid function:
\[\small \mathcal{S}(x) = \frac{1}{1+\exp\left[-a(x-\theta)\right]} - \frac{1}{1+\exp\left({a\theta}\right)}.\]
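A response function of this shifted-logistic form can be checked directly: the second term shifts the curve so that \(\mathcal{S}(0)=0\), and the maximum stays strictly below 1. A minimal sketch with illustrative parameter values for \(a\) and \(\theta\) (the text fixes neither here):

```python
import math

def S(x, a=1.3, theta=4.0):
    """Shifted logistic response function: S(0) = 0, maximum below 1."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

print(S(0.0))    # 0 by construction
print(S(100.0))  # approaches k = 1 - 1/(1 + e^(a*theta)) < 1
```

The limiting value \(k = 1 - 1/(1+e^{a\theta})\) is exactly the maximum that later appears as the constants \(k_e\) and \(k_i\).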
b. Point 2. Phase Plane Analysis, SteadyState Solutions, and Limit Cycles
Depending on the parameter values chosen, there may be either one, three, or five steady-state solutions to the system of equations. We solve the equations numerically: we choose parameter values and initial conditions and plot the solution’s behavior in state space and in time. We define a stimulus configuration as any choice of constant values for the external inputs \(P\) and \(Q\) and investigate the resulting dynamical behavior based on relationships between the couplings \(c_1, c_2, c_3, c_4\) and the sigmoid response parameters \(a_e, a_i\) and \(\theta_e, \theta_i\).
We define the variables characterizing the dynamics of a spatially localized neural population as the following.
\(\mathrm{E(t)} = \text{proportion of excitatory cells firing per unit time } t \tag{1}\)
\(\mathrm{I(t)} = \text{proportion of inhibitory cells firing per unit time } t \tag{2}\)
Now we have the functions \(E(t)\) and \(I(t)\). We assume that the value of these functions at time \((t+\tau)\) equals the proportion of cells that are sensitive (i.e., not refractory) and that receive at least threshold excitation at time \(t\).
Assumptions. We are concerned with the behavior of the subpopulations rather than individual cells. Under the following assumptions, we may ignore random spatial interactions:
First, the study obtains independent expressions for the proportion of sensitive cells and the proportion of cells receiving at least threshold stimulation. The relevant biological assumptions determine when a neuron fires. Consequently, the simplest model must fulfill two conditions for a cell to fire: (1) the cell is sensitive, i.e., not refractory, and (2) the cell receives at least threshold excitation.
If the refractory period has a duration of \(r\), then the proportion of excitatory cells that are refractory at time \(t\) will be given by the following:
\[ \begin{align} \int^t_{t-r}{E(t^\prime) \; dt^\prime} \tag{3} \end{align} \]
Therefore, if \(r\) is the length of the refractory period, the proportion of excitatory cells that are sensitive is the fraction of neurons that satisfy condition one at time \(t\):
\[ \begin{align} 1 - \int^t_{t-r}{E(t^\prime) \; dt^\prime} \tag{4} \end{align} \]
Similar expressions are obtained for the inhibitory subpopulation.
If all cells receive the same number of excitatory and inhibitory afferents, then the subpopulation response function \(\mathcal{S}(x)\) will take the form:
\[ \begin{align} \mathcal{S}(x) & = \int^{x(t)}_0{D(\theta) \; d \theta} \tag{5} \end{align} \]
where \(D(\theta)\) characterizes the distribution of individual neural thresholds.
In other words, we assume that all cells in a subpopulation have the same threshold \(\theta\) and that there is a distribution of the number of afferent synapses per cell. Suppose \(C(w)\) is the synapse distribution function and \(x(t)\) is the average excitation per synapse. In that case, we expect all cells with at least \(\theta / x(t)\) synapses to receive sufficient excitation.
If all cells within a subpopulation have the same threshold \(\theta\), then the distribution of the number of afferent synapses per cell is characterized by \(C(w)\). Therefore, the subpopulation response function takes the following form:
\[ \begin{align} \mathcal{S}(x) & = \int^{\infty}_{\theta / x(t)}{C(w) \; d w} \tag{5a} \end{align} \]
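Either way the response function is an accumulated mass of a unimodal distribution, which is what makes it sigmoidal. A small numerical sketch, assuming a logistic threshold density \(D(\theta)\) purely for illustration (the text does not fix the density), checks that integrating such a density as in eq. (5) yields a monotone, bounded curve:

```python
import math

def D(theta, mean=4.0, s=1.0):
    """Illustrative logistic density of neural thresholds (an assumption)."""
    e = math.exp(-(theta - mean) / s)
    return e / (s * (1.0 + e) ** 2)

def S_from_density(x, n=2000):
    """Trapezoidal approximation of S(x) = integral_0^x D(theta) dtheta (eq. 5)."""
    if x <= 0:
        return 0.0
    h = x / n
    total = 0.5 * (D(0.0) + D(x))
    for i in range(1, n):
        total += D(i * h)
    return h * total

values = [S_from_density(x) for x in (1.0, 4.0, 8.0, 16.0)]
print(values)  # increasing and bounded by 1
```

Any density with one mode gives the same qualitative picture, which is why the specific logistic choice later carries no special significance.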
The study assumes that the resting threshold \(\theta\) is the same for all cells in the population. Hence, it holds that the sigmoid response function is related to the distribution of synapses.
\(\bf \mathcal{S}(x)\): sigmoidal subpopulation response function
\(\bf x=x(t)\): average level of excitation within a subpopulation
\(\bf D(\theta)\): distribution of individual neural thresholds
\(\bf C(w)\): distribution of the number of afferent synapses per cell
Population Growth Models. The sigmoid, logistic growth model is commonly used in biology to model the growth of a population under density dependence:
\[\frac{dy}{dt} = \beta y\left(1-\frac{y}{K}\right)\]
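This logistic equation has the closed-form solution \(y(t) = K / \left(1 + \frac{K - y_0}{y_0} e^{-\beta t}\right)\), so a numerical integration can be checked against it directly. A quick sketch with illustrative values for \(\beta\), \(K\), and \(y_0\):

```python
import math

def logistic_euler(y0=5.0, beta=0.5, K=100.0, dt=0.01, T=20.0):
    """Euler integration of dy/dt = beta * y * (1 - y/K)."""
    y = y0
    for _ in range(int(T / dt)):
        y += dt * beta * y * (1.0 - y / K)
    return y

def logistic_exact(t, y0=5.0, beta=0.5, K=100.0):
    """Closed-form logistic solution."""
    return K / (1.0 + ((K - y0) / y0) * math.exp(-beta * t))

print(logistic_euler(), logistic_exact(20.0))  # both near the carrying capacity K
```

The sigmoid-in-time shape of this solution is the same qualitative shape the response function \(S(x)\) has in the excitation variable.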
To represent neurons’ nonlinear behavior, we use a sigmoid function \(S(x)\), taken as characteristic of any subpopulation response function, dependent on two parameters \(a\) and \(\theta\):
\[S(x) = 1/(1 + e^{-a (x-\theta)}) \tag{6}\]
The response function is sigmoidal if \(S(x)\) monotonically increases on the interval \((-\infty, +\infty)\), approaches the asymptotic values zero and one as \(x\) approaches \(-\infty\) and \(+\infty\) respectively, and has one inflection point.
Sufficient excitation. To determine whether condition 2 is fulfilled, we take the total input to the subpopulation to be a weighted contribution:
\[c_1 E(t) - c_2 I(t) + P(t)\]
We call \(S_e\) the response function, also called the input-frequency characteristic, of the excitatory neurons, and correspondingly \(S_i\) for the inhibitory ones. A spike can still help elicit a new spike in a downstream neuron even a few milliseconds later. The response functions give the expected proportion of cells receiving at least threshold stimulation as a function of the population’s average excitation level. The sigmoid, nonlinear functions \(S_e\) and \(S_i\) take the form of equation (6).
An inhibitory potential arises when hyperpolarization brings about a net negative charge, so the potential moves further from zero. An excitatory potential occurs during depolarization, so the potential moves closer to the excitation threshold. The average level of excitation generated in an excitatory cell at time \(t\) is:
\[ \begin{align} \int^t_{-\infty} \alpha (t-t^\prime) [c_1E (t^\prime) - c_2I (t^\prime) + P(t^\prime)] dt^\prime \end{align} \tag{7} \]
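When the decay kernel \(\alpha(t-t^\prime)\) is exponential, this running average is equivalent to a first-order ODE, \(x^\prime = -\alpha x + s(t)\), which is the fact the later coarse-graining exploits. A minimal check, with an illustrative constant input \(s\), that the direct Riemann sum of the kernel and the ODE both settle at \(s/\alpha\):

```python
import math

def kernel_average(s=2.0, alpha=0.5, dt=0.01, T=40.0):
    """Direct Riemann sum of integral_0^T exp(-alpha*(T - t')) * s dt'."""
    n = int(T / dt)
    return sum(math.exp(-alpha * (T - i * dt)) * s * dt for i in range(n))

def ode_average(s=2.0, alpha=0.5, dt=0.01, T=40.0):
    """Euler integration of dx/dt = -alpha*x + s, the equivalent ODE form."""
    x = 0.0
    for _ in range(int(T / dt)):
        x += dt * (-alpha * x + s)
    return x

print(kernel_average(), ode_average())  # both approach s/alpha = 4
```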
Here, we define the parameters \(c_1, c_2>0\) as the connection coefficients, representing the average number of excitatory and inhibitory synapses per cell. We denote \(P(t)\) and \(Q(t)\) as the external input of excitatory and inhibitory subgroups, respectively. The inhibitory subgroup will use similar expressions but with different coefficients and different external inputs.


The different coefficients reflect the difference in axon and dendritic geometry between excitatory and inhibitory cell types. In contrast, variations in external inputs assume the existence of cell types specific to the population. We can illustrate the diversity in axon and dendritic geometry between excitatory and inhibitory cell types by drawing several neurons, as shown below.
Overall, the dendrite branch structure is an essential feature in coupling synaptic inputs and managing action potentials in neurons. Each neuron has a unique branch density and pattern: this unique morphology correlates to the neuron’s function.
Suppose the probability that a cell is sensitive is independent of the probability that a cell is excited above threshold. In that case, we can write the excitatory subpopulation activity as the following expression:
\[ \begin{align*} \left[{1-\int^t_{t-r}E(t^\prime) \; dt^\prime}\right] \mathcal{S}_e(x) \, \delta t \tag{8} \end{align*} \]
We designate this correlation between excitation and sensitivity by the following expression \(\gamma \left[{\int^t_{t-r}E(t^\prime) \; dt^\prime,\mathcal{S}_e(x)}\right],\) so that the previous expression becomes:
\[ \small \begin{align} &\left[ 1-\int^t_{t-r}E(t^\prime) \; dt^\prime \right] \\ &\qquad \qquad {\Large{\bf \cdotp}} \mathcal{S}_e(x) \left\{1-\gamma \left[{\int^t_{t-r}E(t^\prime) \; dt^\prime, \mathcal{S}_e(x)}\right] \right \} \delta t \end{align} \tag{9} \]
Due to fluctuations inherent in the average excitation and cell thresholds, the correlation parameter \(\gamma\) is taken to be zero: \(\gamma = 0\).
It follows that the equations governing the dynamics of a localized population of neurons take the form of the following expressions:
\[ \begin{align*} \mathbf{E}(t+\tau) =& {\left[ 1-\int^t_{t-r}E(t^\prime) \; \mathrm{d}t^\prime \right ]} \\ &\quad {\Large{\bf \cdotp}} \; \mathcal{S}_e \left\{ \int^t_{-\infty} \alpha(t-t^\prime) [c_1 E(t^\prime) - c_2 I(t^\prime) + P(t^\prime)] \mathrm{d}t^\prime \right\} , \tag{10} \end{align*} \]
\[ \begin{align} \mathbf{I}(t+\tau^\mathrm{\prime}) =& {\left[ 1-\int^t_{t-r^\mathrm{\prime}}I(t^\prime) \; \mathrm{d}t^\prime \right ]} \\ &\quad {\Large{\bf \cdotp}} \; \mathcal{S}_i \left\{ \int^t_{-\infty} \alpha(t-t^\prime) [c_3 E(t^\prime) - c_4 I(t^\prime) + Q(t^\prime)] \mathrm{d}t^\prime \right\} , \tag{11} \end{align} \]
for the excitatory and inhibitory subpopulations. The parameters \(\tau\) and \(\tau^{\prime}\) represent the response delays, after which cells at time \(t\) will be firing.


We are interested in the coarse-grained transitory behavior of the neural activity. Coarse-grained modeling aims to simulate the behavior of complex systems using simplified representations. In the above equations, we can remove the integral and multiply the stimulus by a constant describing the length of its temporal influence. Consider the following application of time coarse-graining to the equation for \(E(t)\):
\[ \begin{align} \mathbf{E}(t+\tau) =& {\left[ 1-r\bar E(t)\right ]} \\ &\quad {\Large{\bf \cdotp}} \; \mathcal{S}_e \left[ {kc_1 \bar E - kc_2 \bar I + kP(t)} \right] , \end{align} \]
Now, assuming that \(\alpha(t-t^{\prime})\) decays exponentially, and temporarily excluding inhibitory interactions, we expand the equation to lowest order in \(\tau\):
\[ \begin{align} \tau \frac{d \bar E}{dt} &= -\bar E + \left[1-\int^t_{t-r}E(t^{\prime})dt^{\prime}\right] \mathcal{S}_e \left\{ \int^t_{-\infty} e^{-\alpha(t-t^{\prime})} \left[c_1E(t^\prime) + P(t^\prime)\right]dt^\prime \right\} \end{align} \]
Now, we perform a Taylor series expansion about \(\tau=0\), retaining only the linear term:
\[ \begin{align} \tau \frac{d \bar E}{dt} &= -\bar E + (1-r \bar E) \mathcal{S}_e \left[{k c_1 \bar E - kc_2\bar{I}+kP(t)} \right] \end{align} \]
We can say that the activity at time \(t + dt\) depends on the simultaneous satisfaction of conditions (1) and (2):
\[ E(t + dt) = \left(1 - rE\left(t\right)\right)\, S_e \, \left(kc_1 E\left(t\right) - kc_2 I\left(t\right) + kP\left(t\right)\right) \]
For physiologically significant values of \(\alpha\) and \(r\), the coarse-grained equations are valid.
After the above reductions and assumptions, recasting the original system in differential form and suitably rescaling \(S_e\) and \(S_i\), we reach coupled, nonlinear differential equations for the excitatory and inhibitory populations’ firing rates.
\[\tau_e \frac{dE}{dt} = -E + (k_e - r_e E) \, S_e(c_1 E - c_2 I + P)\]
\[\tau_i \frac{dI}{dt} = -I + (k_i - r_i I) \, S_i(c_3 E - c_4 I + Q),\]
where we define \(\tau_e\) and \(\tau_i\) as the time constants, \(k_e\) and \(k_i\) as the nondimensional constants, \(r_e\) and \(r_i\) as the constants describing the lengths of the refractory periods, \(S_e\) and \(S_i\) as sigmoid functions representing the nonlinearity of the interactions, \(c_{1,2,3,4}\) as the parameters representing the strengths of the excitatory and inhibitory interactions, and \(P\) and \(Q\) as the external inputs to the excitatory and inhibitory populations, respectively.
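The coupled system translates directly into code. A sketch of the right-hand sides, using a shifted logistic for \(S_e\) and \(S_i\) and the parameter names from the text; the specific numeric values below are illustrative assumptions, not values the text fixes at this point:

```python
import math

def S(x, a, theta):
    """Shifted logistic response function with S(0) = 0."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

def wilson_cowan_rhs(E, I, p):
    """dE/dt and dI/dt for the coupled excitatory/inhibitory populations."""
    k_e = 1.0 - 1.0 / (1.0 + math.exp(p["a_e"] * p["th_e"]))  # max of shifted S_e
    k_i = 1.0 - 1.0 / (1.0 + math.exp(p["a_i"] * p["th_i"]))  # max of shifted S_i
    dE = (-E + (k_e - p["r_e"] * E) * S(p["c1"] * E - p["c2"] * I + p["P"], p["a_e"], p["th_e"])) / p["tau_e"]
    dI = (-I + (k_i - p["r_i"] * I) * S(p["c3"] * E - p["c4"] * I + p["Q"], p["a_i"], p["th_i"])) / p["tau_i"]
    return dE, dI

# Illustrative parameters; with P = Q = 0 the resting state (0, 0) is a fixed point.
params = dict(c1=16, c2=12, c3=15, c4=3, a_e=1.3, th_e=4.0, a_i=2.0, th_i=3.7,
              r_e=1.0, r_i=1.0, tau_e=1.0, tau_i=1.0, P=0.0, Q=0.0)
print(wilson_cowan_rhs(0.0, 0.0, params))
```

Because \(S(0)=0\) by construction, both derivatives vanish at \(E=I=0\) when \(P=Q=0\), which is exactly the resting-state requirement discussed below.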


The System’s Resting State. The point \((E=0, I=0)\) is the system’s resting state, which we require to be a stable fixed point. The mathematical consequence is that \(E=0, I=0\) must be a steady-state solution for \(P(t)=Q(t)=0\), i.e., in the absence of external inputs. This can be accomplished by shifting \(S_e\) and \(S_i\) so that \(S_e(0)=0\) and \(S_i(0)=0\); thus, we subtract \(S(0)\) from the original function. The maximum values of \(S_e\) and \(S_i\) are then less than 1; we denote them \(k_e\) and \(k_i\). Furthermore, the resting state must be stable to be of physiological significance.
\[ \begin{align} \tau_e\frac{dE}{dt} &= -E + (k_e - r_e E)S_e(c_1 E - c_2 I + P),\tag{12} \\ \tau_i\frac{dI}{dt} &= -I + (k_i - r_i I)S_i(c_3 E - c_4 I + Q) \tag{13} \end{align} \]
We can write the equations for the nullclines corresponding to \(dE/dt=0\) and \(dI/dt=0\). Since \(S_e\) and \(S_i\) are both sigmoidal, the functions have unique inverses. Hence, we can equate the nullclines to the following expressions:
\[ \begin{align} c_2 \mathbf{I} &= c_1 \mathbf{E} - S_e^{-1}\left( \frac{E}{k_e-r_eE}\right) +P ,\quad \frac{dE}{dt}=0 \tag{14} \\ c_3 \mathbf{E} &= c_4 \mathbf{I} + S_i^{-1}\left( \frac{I}{k_i-r_iI}\right) - Q ,\quad \frac{dI}{dt}=0 \tag{15} \end{align} \]
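Because \(S\) is invertible, each nullcline can be evaluated pointwise. For the shifted logistic, the inverse is \(x = \theta - \frac{1}{a}\ln\left(\frac{1}{y + s_0} - 1\right)\), where \(s_0 = 1/(1+e^{a\theta})\) is the shift. A round-trip check with illustrative \(a\) and \(\theta\):

```python
import math

def S(x, a=1.3, theta=4.0):
    """Shifted logistic with S(0) = 0."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

def S_inv(y, a=1.3, theta=4.0):
    """Inverse of the shifted logistic, valid for 0 < y + s0 < 1."""
    s0 = 1.0 / (1.0 + math.exp(a * theta))
    return theta - (1.0 / a) * math.log(1.0 / (y + s0) - 1.0)

# Round trip: S_inv(S(x)) should recover x.
for x in (1.0, 3.0, 5.0):
    print(x, S_inv(S(x)))
```

With \(S_e^{-1}\) in hand, the \(E\)-nullcline gives \(I\) explicitly as a function of \(E\), and likewise for the \(I\)-nullcline, so both curves can be plotted without any root finding.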
Depending on the parameter values chosen, there may be either one, three, or five steady-state solutions to the system of equations.
Defining Parameters. Now, we solve the above equations numerically. We choose parameter values and initial conditions and plot the solution’s behavior in state space and in time. We define a stimulus configuration as any choice of constant values for the external inputs \(P\) and \(Q\) and investigate the resulting dynamical behavior based on relationships between the couplings \(c_1, c_2, c_3, c_4\) and the sigmoid response parameters \(a_e, a_i\) and \(\theta_e, \theta_i\). Here, \(a\) and \(\theta\) are parameters of the sigmoid function:
\[\mathcal{S}(x) = \frac{1}{1+\exp\left[-a(x-\theta)\right]} - \frac{1}{1+\exp\left({a\theta}\right)}.\]
No particular significance is associated with our choice for the logistic curve; any other function with the characteristic sigmoid properties would likewise be suitable.


In the following, we define the couplings as \(c_1 = 12, c_2 = 4, c_3 = 13, c_4 = 11\); the refractory periods as \(r_e = 1, r_i = 1\); and the external inputs as \(P = 0, Q = 0\).
In our numerical search for steady-state points, we study the cases where the derivatives of \(E\) and \(I\) are zero. Since the system is highly nonlinear, we must do this numerically. First, we draw the flow field, and then we draw the nullclines of the system. The \(x\)-nullclines are defined by \(f(x,y)=0\), and the \(y\)-nullclines are defined by \(g(x, y)=0\). These are the locations where \(x\) and \(y\) do not change with time.
Another use of nullclines becomes evident: the locations where the \(x\)- and \(y\)-nullclines intersect are the equilibrium points.
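These intersections can be located numerically with a Newton iteration on the pair \((dE/dt, dI/dt)\). A sketch using the couplings defined above; the sigmoid values \(a_e=1.2\), \(\theta_e=2.8\), \(a_i=1.0\), \(\theta_i=4.0\) (and \(\tau_e=\tau_i=1\)) are assumptions, since the text does not fix them here:

```python
import math

def S(x, a, theta):
    """Shifted logistic response function with S(0) = 0."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

def rhs(E, I):
    """(dE/dt, dI/dt) with the couplings from the text; sigmoid values assumed."""
    c1, c2, c3, c4 = 12.0, 4.0, 13.0, 11.0
    a_e, th_e, a_i, th_i = 1.2, 2.8, 1.0, 4.0  # assumed, not fixed by the text
    k_e = 1.0 - 1.0 / (1.0 + math.exp(a_e * th_e))
    k_i = 1.0 - 1.0 / (1.0 + math.exp(a_i * th_i))
    f = -E + (k_e - E) * S(c1 * E - c2 * I, a_e, th_e)  # r_e = 1, P = 0
    g = -I + (k_i - I) * S(c3 * E - c4 * I, a_i, th_i)  # r_i = 1, Q = 0
    return f, g

def newton(E, I, steps=50, h=1e-6):
    """2D Newton iteration with a finite-difference Jacobian."""
    for _ in range(steps):
        f, g = rhs(E, I)
        fE = (rhs(E + h, I)[0] - f) / h
        fI = (rhs(E, I + h)[0] - f) / h
        gE = (rhs(E + h, I)[1] - g) / h
        gI = (rhs(E, I + h)[1] - g) / h
        det = fE * gI - fI * gE
        if abs(det) < 1e-12:
            break
        E -= (f * gI - g * fI) / det
        I -= (g * fE - f * gE) / det
        # keep iterates in a physically sensible range to avoid overflow
        E = min(max(E, -1.0), 2.0)
        I = min(max(I, -1.0), 2.0)
    return E, I

# The resting state (0, 0) is one intersection; other starts may find others.
for guess in ((0.0, 0.0), (0.1, 0.15), (0.3, 0.45)):
    E, I = newton(*guess)
    print((round(E, 4), round(I, 4)), rhs(E, I))
```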
To determine the maximum slope of the curve \(I\) as a function of \(E\), we require that the slope of Equation 14 at the inflection point of \(S_e^{-1}\) is positive. The slope of the isocline at this point is:
\[\left(\frac{c_1}{c_2}-\frac{9}{a_ec_2}\right)\]
Theorem 1. A stimulus configuration is defined as any choice of constant values for P and Q. If
\[c_1>9/a_e \tag{16}\]
then there is a class of stimulus configurations for which the nullcline equations (14) and (15) have at least three intersections. In other words, equations (12) and (13) have at least three steady-state solutions. Here, \(a_e\) is the slope parameter for the excitatory response function.
To simplify obtaining the result of condition (16), we set the refractory periods \(r_e\) and \(r_i\) to be equal. Without loss of generality, we use these values of \(r_e\) and \(r_i\) in the rest of the study.
We can see that the system has three steady-state solutions, thus satisfying Theorem 1. The phase plane diagram exhibits a stable fixed point at \((E,I) = (0.0285, 0.0644)\), an unstable saddle at \((E,I) = (0.0558, 0.1438)\), and a stable fixed point at \((E,I) = (0.2408, 0.4555)\). Therefore, we can always choose \(P\) and \(Q\) so that there are three intersection points. The physiological significance of Theorem 1 is that \(1/a_e\), where \(a_e\) is the excitatory sigmoid response parameter, directly relates to the variance of the distribution of thresholds or synaptic connections.
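The stability labels can be checked with linear analysis: a fixed point of a planar system is stable when the Jacobian there has negative trace and positive determinant. A sketch for the resting state, again with the couplings above and assumed sigmoid parameters \(a_e=1.2\), \(\theta_e=2.8\), \(a_i=1.0\), \(\theta_i=4.0\) and \(\tau_e=\tau_i=1\) (illustrative; the text does not fix them here):

```python
import math

def S(x, a, theta):
    """Shifted logistic response function with S(0) = 0."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

def rhs(E, I):
    """(dE/dt, dI/dt) at P = Q = 0, with assumed sigmoid parameters."""
    c1, c2, c3, c4 = 12.0, 4.0, 13.0, 11.0
    a_e, th_e, a_i, th_i = 1.2, 2.8, 1.0, 4.0  # assumed values
    k_e = 1.0 - 1.0 / (1.0 + math.exp(a_e * th_e))
    k_i = 1.0 - 1.0 / (1.0 + math.exp(a_i * th_i))
    f = -E + (k_e - E) * S(c1 * E - c2 * I, a_e, th_e)
    g = -I + (k_i - I) * S(c3 * E - c4 * I, a_i, th_i)
    return f, g

def jacobian(E, I, h=1e-6):
    """Finite-difference Jacobian of the (dE/dt, dI/dt) vector field."""
    f0, g0 = rhs(E, I)
    fE = (rhs(E + h, I)[0] - f0) / h
    fI = (rhs(E, I + h)[0] - f0) / h
    gE = (rhs(E + h, I)[1] - g0) / h
    gI = (rhs(E, I + h)[1] - g0) / h
    return fE, fI, gE, gI

fE, fI, gE, gI = jacobian(0.0, 0.0)
trace, det = fE + gI, fE * gI - fI * gE
print(trace, det)  # trace < 0 and det > 0: the resting state is stable
```

The same trace/determinant test applied at the other two intersections distinguishes the saddle (negative determinant) from the stable node.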
Theorem 2. Let the parameters of a neural population satisfy the following condition:
\[ \begin{align} \frac{a_e c_2}{a_e c_1 - 9} &> \frac{a_i c_4 + 9}{a_i c_3} \tag{17} \end{align} \]
Then, there will be five steady states for a particular choice of stimulus configuration, though not simultaneously. This theorem alone is not sufficient to establish multiple hysteresis.
The physiological significance of Theorem 2 is that \(c_2\) and \(c_3\) measure the strength of negative feedback. Therefore, these five steady-state solutions require a powerful negative feedback loop.
There is another type of transitory behavior exhibited by our model: a limit cycle. If there is only one steady-state solution, and that solution is unstable, a limit cycle occurs. We can use linear stability analysis to derive a sufficient, but not necessary, condition for such a steady state’s instability:
\[ \begin{align} c_1a_e \gt c_4 a_i + 18 \tag{18} \end{align} \]
The above condition comes from the instability requirements for the limit cycle. In terms of periodic behavior, it says that the self-interaction of the excitatory population must be significantly greater than the self-interaction of the inhibitory population.
For some selected \(P\) and \(Q\), there is a single steady-state point where the values of \(E\) and \(I\) are close to the response functions’ inflection points, resulting in the following conditions:
\[ \begin{align} \frac{a_e c_2}{a_e c_1 - 9} &> \frac{a_i c_4 + 9}{a_i c_3} \tag{19} \\ \frac{a_e c_2}{a_e c_1 - 9} &> \frac{a_i c_4 + 9}{a_i c_3} \tag{20} \end{align} \]
Requirement 19 is identical to condition 17. Requirement 20 ensures that there is one steady state instead of five steady states.
Theorem 3. Choose the parameters so that requirement 19 is satisfied. If expression 18 is not satisfied, multiple hysteresis phenomena will occur for certain stimulus configurations. On the other hand, if requirements 18 and 20 are met, we obtain limit cycle dynamics for certain stimulus configurations.
Nonlinear systems can exhibit a type of behavior known as a limit cycle. If there is only one steady-state solution, and that solution is unstable, a limit cycle occurs. In the following, we define the parameters satisfying conditions 18 and 20 as \(c_1=16\), \(c_2 = 12\), \(c_3=15\), \(c_4=3\), \(a_e = 1.3\), \(\theta_e=4\), \(a_i=2\), \(\theta_i = 3.7\), \(r_e=1\), and \(r_i=1\). We can determine a steady-state solution by the intersection of the nullclines as follows.
In the phase plane, a limit cycle is an isolated closed orbit: “closed” refers to the periodicity of the motion, and “isolated” means that nearby trajectories either converge to or diverge from it. We can alter our initial values \(E_0\) and \(I_0\) to obtain different paths in the phase space.
The phase plane analysis illustrates a bounded steady-state solution that is unstable; this is a typical feature of a limit cycle. The solution’s oscillating behavior, shown in Figure 10, follows typical limit cycle behavior:
i. Trajectories near the equilibrium point are pushed further away from the equilibrium.
ii. Trajectories far from the equilibrium point move closer toward the equilibrium.
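This oscillation can be reproduced numerically with the parameters given above and simple Euler integration. The constant drive \(P = 1.25\), \(Q = 0\), the time constants \(\tau_e=\tau_i=1\), and the initial condition are assumptions for illustration; the text only states that some constant input is required:

```python
import math

def S(x, a, theta):
    """Shifted logistic response function with S(0) = 0."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

def simulate(P=1.25, Q=0.0, dt=0.01, T=100.0):
    """Euler integration of the E/I system with the limit-cycle parameters."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
    a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
    k_e = 1.0 - 1.0 / (1.0 + math.exp(a_e * th_e))
    k_i = 1.0 - 1.0 / (1.0 + math.exp(a_i * th_i))
    E, I, history = 0.1, 0.1, []
    for _ in range(int(T / dt)):
        dE = -E + (k_e - E) * S(c1 * E - c2 * I + P, a_e, th_e)  # r_e = 1
        dI = -I + (k_i - I) * S(c3 * E - c4 * I + Q, a_i, th_i)  # r_i = 1
        E += dt * dE
        I += dt * dI
        history.append(E)
    return history

E_hist = simulate()
late = E_hist[len(E_hist) // 2:]  # discard the transient
print(min(late), max(late))       # a sustained spread indicates oscillation
```

Plotting `E_hist` against the corresponding inhibitory trace yields the closed orbit of the limit cycle; plotting it against time gives the oscillation of Figure 10.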
We established that the resting state \(E=0, I=0\) is stable in the absence of an outside force. Therefore, the neural population only exhibits limit cycle activity in response to a constant external input (\(P\) or \(Q\)). All in all, the premise of Theorem 3 is that there is only one steady state, and it must be near the inflection point for a limit cycle to exist. We therefore study the limiting behavior as a function of \(P\), with \(Q=0\).
This study’s results followed directly from the analysis of distinct excitatory and inhibitory neural populations and illustrated the existence of multiple hysteresis loops and limit cycles. The biggest difference between this study and previous work is the distinction between excitatory and inhibitory neuronal populations, treating the effects of inhibitory neurons separately. Accordingly, this study dealt with two distinct populations, thereby requiring the two variables \(E(t)\) and \(I(t)\) to characterize the population’s state.
Wilson and Cowan’s model is a system of coupled, nonlinear differential equations for the firing rates of the excitatory and inhibitory populations. A large number of neurons make up the neural system. Under the control of external stimulation and bifurcation parameters, these networks exhibit diverse dynamics and flow patterns. The Wilson-Cowan study models the excitatory and inhibitory effects in synaptically coupled neuronal systems. By employing two differential equations, the system becomes a “coarse-grained” depiction of a neural network’s activity. The model’s primary parameters include the couplings between the excitatory and inhibitory populations and the external stimulus allotted to each subpopulation. This study shows that varying these parameters generates various dynamical behaviors that correspond to activity in the brain.
The logistic growth model is commonly used in biology to model a population’s growth under density dependence. The study assumed that the resting threshold \(\theta\) is the same for all cells in the population. Hence, the sigmoid (logistic) response function \(\mathcal{S}(x)\) is related to the distribution of synapses. Here, \(a\) and \(\theta\) are parameters of the sigmoid function:
\[\mathcal{S}(x) = \frac{1}{1+\exp\left[-a(x-\theta)\right]} - \frac{1}{1+\exp\left({a\theta}\right)}.\]
For the Wilson-Cowan model, we predicted the potential using the sum of the excitatory and inhibitory populations’ activities. We can then use dynamical systems methods such as phase plane, eigenvalue, and bifurcation analysis for both subpopulations.
Depending on the parameter values chosen, there may be either one, three, or five steady-state solutions to the system of equations. We chose parameter values and initial conditions and plotted the solution’s behavior. We defined a stimulus configuration as any choice of constant values for the external inputs \(P\) and \(Q\) and investigated the resulting dynamical behavior based on relationships between the couplings \(c_1, c_2, c_3, c_4\) and the sigmoid response parameters \(a_e, a_i\) and \(\theta_e, \theta_i\).
[1] EdelsteinKeshet, L. (2005). Mathematical models in biology. Society for Industrial and Applied Mathematics, Philadelphia.
[2] Murray, J. D. (2002). Mathematical biology. Springer, New York.
[3] Rinzel, J. and Ermentrout, G. B. (1998). Analysis of neural excitability and oscillations. Methods in neuronal modeling 2 251–92.
[4] Segel, L. A. (1984). Modeling dynamic phenomena in molecular and cellular biology. Cambridge University Press, Cambridge ; New York.
[5] Strogatz, S. H. (1994). Nonlinear dynamics and chaos: With applications to physics, biology, chemistry, and engineering. Perseus Books Publishing, L.L.C., Reading, Massachusetts.
[6] Wilson, H. R. and Cowan, J. D. (1972). Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons. Biophysical Journal 12 1–24.
[7] Winfree, A. T. (1980). The Geometry of Biological Time. Springer Berlin Heidelberg, Berlin, Heidelberg.
[8] Ingalls, B. (2012). Mathematical Modelling in Systems Biology: An Introduction. University of Waterloo.