diff --git a/07-basic_statistics.qmd b/07-basic_statistics.qmd
index 2503c46e41fba537343d5f75268f3e3aeef28c2b..4490349d7f15198aa63c4c52c5bbb7cd2a985096 100644
--- a/07-basic_statistics.qmd
+++ b/07-basic_statistics.qmd
@@ -4,7 +4,7 @@ bibliography: references.bib
 
 # Basic statistics for spatial analysis
 
-This section aims at providing some basic statistical tools to study the spatial distribution of epidemiological data. If you wish to go further into spatial statistics applied to epidemiology and their limitations you can consult the tutorial "[Spatial Epidemiology](https://mkram01.github.io/EPI563-SpatialEPI/index.html)" from M. Kramer from which the statistical analysis of this section was adapted. 
+This section aims to provide some basic statistical tools to study the spatial distribution of epidemiological data. If you wish to go further into spatial statistics applied to epidemiology and their limitations, you can consult the tutorial "[Spatial Epidemiology](https://mkram01.github.io/EPI563-SpatialEPI/index.html)" by M. Kramer, from which the statistical analysis of this section was adapted.
 
 ## Import and visualize epidemiological data
 
@@ -191,7 +191,8 @@ Under the Moran's test, the statistics hypotheses are:
 We will compute the Moran's statistics using the `spdep`[@spdep] and `DCluster`[@DCluster] packages. The `spdep` package provides a collection of functions to analyze spatial correlations between polygons and works with sp objects. In this example, we use `poly2nb()` and `nb2listw()`. These functions respectively detect the neighboring polygons and assign weights corresponding to $1/\#\ of\ neighbors$. The `DCluster` package provides a set of functions for the detection of spatial clusters of disease using count data.
 
 ```{r MoransI, eval = TRUE, echo = TRUE, nm = TRUE, fig.width=8, class.output="code-out", warning=FALSE, message=FALSE}
-
+#install.packages("spdep")
+#install.packages("DCluster")
 library(spdep) # Functions for creating spatial weight, spatial analysis
 library(DCluster)  # Package with functions for spatial cluster analysis
 
@@ -225,8 +226,7 @@ For each district $i$, the Local Moran's I statistics is:
 $$I_i = \frac{(Y_i-\bar{Y})}{\sum_{i=1}^N(Y_i-\bar{Y})^2}\sum_{j=1}^Nw_{ij}(Y_j - \bar{Y}) \text{ with }  I = \sum_{i=1}^NI_i/N$$
 :::
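
As a sanity check, the decomposition in the callout above can be reproduced with base R on a made-up 4-region lattice (all values and weights below are purely illustrative, not the tutorial's district data):

```r
# Toy check of the local Moran decomposition (made-up data)
Y <- c(2, 4, 6, 8)                          # variable of interest in 4 fake regions
W <- matrix(c(0, 1, 0, 0,
              1, 0, 1, 0,
              0, 1, 0, 1,
              0, 0, 1, 0), 4, 4, byrow = TRUE)
W  <- W / rowSums(W)                        # row-standardized weights, w_ii = 0
dy <- Y - mean(Y)                           # deviations (Y_i - Ybar)
Ii <- dy / sum(dy^2) * as.vector(W %*% dy)  # local I_i for each region
I  <- sum(Ii) / length(Y)                   # global I as defined in the callout
```

Each $I_i$ is large and positive when a region and its neighbors deviate from the mean in the same direction, which is what the classification below exploits.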
 
-The `localmoran()`function from the package `spdep` treats the variable of interest as if it was normally distributed. In some cases, this assumption could be reasonable for incidence rate, especially when the areal units of analysis have sufficiently large population count suggesting that the values have similar level of variances. Unfortunately, the local Moran’s test has not been implemented for Poisson distribution (population not large enough in some districts) in `spdep` package. However, Bivand **et al.** [@bivand2008applied] provided some code to manually perform the analysis using Poisson distribution and this code was further implemented in the course "[Spatial Epidemiology](https://mkram01.github.io/EPI563-SpatialEPI/index.html)”.
-
+The `localmoran()` function from the package `spdep` treats the variable of interest as if it were normally distributed. In some cases, this assumption could be reasonable for incidence rates, especially when the areal units of analysis have sufficiently large population counts, suggesting that the values have similar levels of variance. Unfortunately, the local Moran's test has not been implemented for the Poisson distribution (the population is not large enough in some districts) in the `spdep` package. However, Bivand **et al.** [@bivand2008applied] provided some code to manually perform the analysis using the Poisson distribution, and this code was further implemented in the course "[Spatial Epidemiology](https://mkram01.github.io/EPI563-SpatialEPI/index.html)".
 
 ```{r LocalMoransI, eval = TRUE, echo = TRUE, nm = TRUE, fig.width=8, class.output="code-out", warning=FALSE, message=FALSE}
 
@@ -274,16 +274,15 @@ Briefly, the process consist on 1) computing the I statistics for the observed d
 
 A conventional way of plotting these results is to classify the districts into 5 classes based on the local Moran's I output. The classification of clusters that are significantly autocorrelated with their neighbors is based on a comparison of the scaled incidence in the district with the scaled weighted average incidence of its neighboring districts (computed with `lag.listw()`):
 
--    Districts that have higher-than-average rates in both index regions and their neighbors and showing statistically significant positive values for the local $I_i$ statistic are defined as __High-High__ (hotspot of the disease)
+-   Districts that have higher-than-average rates in both the index region and its neighbors, and show statistically significant positive values for the local $I_i$ statistic, are defined as **High-High** (hotspots of the disease).
 
--   Districts that have lower-than-average rates in both index regions and their neighbors and showing statistically significant positive values for the local $I_i$ statistic are defined as  __Low-Low__ (cold spot of the disease).
+-   Districts that have lower-than-average rates in both the index region and its neighbors, and show statistically significant positive values for the local $I_i$ statistic, are defined as **Low-Low** (cold spots of the disease).
 
--   Districts that have higher-than-average rates in the index regions and lower-than-average rates in their neighbors, and showing statistically significant negative values for the local $I_i$ statistic are defined as  __High-Low__(outlier with high incidence in an area with low incidence).
+-   Districts that have higher-than-average rates in the index region and lower-than-average rates in its neighbors, and show statistically significant negative values for the local $I_i$ statistic, are defined as **High-Low** (outliers with high incidence in an area of low incidence).
 
--   Districts that have lower-than-average rates in the index regions and higher-than-average rates in their neighbors, and showing statistically significant negative values for the local $I_i$ statistic are defined as  __Low-High__ (outlier of low incidence in area with high incidence). 
-
--   Districts with non-significant values for the $I_i$ statistic are defined as __Non-significant__.
+-   Districts that have lower-than-average rates in the index region and higher-than-average rates in its neighbors, and show statistically significant negative values for the local $I_i$ statistic, are defined as **Low-High** (outliers of low incidence in an area of high incidence).
 
+-   Districts with non-significant values for the $I_i$ statistic are defined as **Non-significant**.
 
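Before running the full workflow, the quadrant logic above can be sketched on toy values (all numbers below are invented for illustration; the actual analysis uses `lag.listw()` on the district data and the simulated pseudo p-values):

```r
# Toy sketch of the five-class labelling (made-up values, not the tutorial data)
inc     <- c(10, 9, 1, 2, 8)                 # incidence in 5 fake districts
lag_inc <- c(9, 8, 2, 9, 1)                  # weighted mean incidence of neighbors
p_val   <- c(0.01, 0.20, 0.03, 0.04, 0.02)   # pseudo p-values of the local I_i
z  <- scale(inc)[, 1]                        # scaled incidence
lz <- scale(lag_inc)[, 1]                    # scaled lagged incidence
cluster <- ifelse(p_val >= 0.05, "Non-significant",
           ifelse(z > 0 & lz > 0, "High-High",
           ifelse(z < 0 & lz < 0, "Low-Low",
           ifelse(z > 0 & lz < 0, "High-Low", "Low-High"))))
```

Only the sign combination of the scaled value and its spatial lag, together with significance, determines the class of each district.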
 ```{r LocalMoransI_plt, eval = TRUE, echo = TRUE, nm = TRUE, fig.width=8, class.output="code-out", warning=FALSE, message=FALSE}
 
@@ -319,13 +318,11 @@ mf_layout(title = "Cluster using Local Moran's I statistic")
 
 ```
 
-
-
 ### Spatial scan statistics
 
 While Moran's indices focus on testing for autocorrelation between neighboring polygons (under the null assumption of spatial independence), the spatial scan statistic aims at identifying an abnormally high risk in a given region compared to the risk outside of this region (under the null assumption of homogeneous distribution). The conception of a cluster is therefore different between the two methods.
 
-The function `kulldorff` from the package `SpatialEpi` [@SpatialEpi] is a simple tool to implement spatial-only scan statistics. 
+The function `kulldorff` from the package `SpatialEpi` [@SpatialEpi] is a simple tool to implement spatial-only scan statistics.
 
 ::: callout-note
 ##### Kulldorf test
@@ -335,10 +332,8 @@ Under the kulldorff test, the statistics hypotheses are:
 -   **H0**: the risk is constant over the area, i.e., there is a spatial homogeneity of the incidence.
 
 -   **H1**: a particular window has a higher incidence than the rest of the area, i.e., there is spatial heterogeneity of the incidence.
-
 :::
 
-
 Briefly, the Kulldorff scan statistic scans the area for clusters in several steps:
 
 1.  It creates a circular window of observation by defining a single location and an associated radius varying from 0 to a large number that depends on the population distribution (the largest radius can include up to 50% of the population).
@@ -349,13 +344,12 @@ Briefly, the kulldorff scan statistics scan the area for clusters using several
 
 4.  These 3 steps are repeated for each location and each possible window radius.
 
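The likelihood-ratio computation in steps 1-3 can be sketched for a single center with base R (the 5 regions, coordinates, and counts below are made up for illustration; the `kulldorff()` function does this for every center and radius):

```r
# Toy Poisson scan for one window center (made-up data)
xy    <- cbind(x = c(0, 1, 2, 5, 6), y = rep(0, 5))  # centroids of 5 fake regions
cases <- c(8, 7, 6, 1, 1)
pop   <- rep(100, 5)
C  <- sum(cases)
ex <- C * pop / sum(pop)       # expected cases per region under H0 (constant risk)
E  <- sum(ex)

center <- 1                    # step 1: windows centered on region 1
d <- sqrt(colSums((t(xy) - xy[center, ])^2))          # distances to the center
llr <- sapply(sort(unique(d)), function(r) {          # steps 2-3: grow the radius
  inz  <- d <= r
  c_in <- sum(cases[inz]); e_in <- sum(ex[inz])
  if (c_in == C ||             # window covers everything: no outside to compare
      c_in / e_in <= (C - c_in) / (E - e_in)) return(0)
  c_in * log(c_in / e_in) + (C - c_in) * log((C - c_in) / (E - e_in))
})
max(llr)                       # most-likely window for this center
```

Here the window covering the three high-incidence regions maximizes the log-likelihood ratio, which is the quantity later compared to its Monte Carlo null distribution.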
-
 While we test the significance of a large number of observation windows, one can raise concerns about multiple testing and Type I error. This approach, however, assumes that we are not interested in a set of significant clusters but only in the single most-likely cluster. This **a priori** restriction eliminates concerns about multiple comparisons since the test reduces to assessing the statistical significance of one single most-likely cluster.
 
 Because we tested all possible locations and window radii, we can also choose to look at secondary clusters. In this case, you should keep in mind that increasing the number of secondary clusters you select increases the risk of Type I error.
 
 ```{r spatialEpi, eval = TRUE, echo = TRUE, nm = TRUE, class.output="code-out", warning=FALSE, message=FALSE}
-
+#install.packages("SpatialEpi")
 library("SpatialEpi")
 
 ```
@@ -387,7 +381,6 @@ kd_Wfever <- kulldorff(district_xy,
 
 The function plots the histogram of the distribution of the log-likelihood ratio simulated under the null hypothesis, which is estimated with Monte Carlo simulations. The observed value of the most significant cluster identified among all possible scans is compared to this distribution to determine significance. All outputs are saved into an R object, here called `kd_Wfever`. Unfortunately, the package does not provide any summary or visualization of the results, but we can explore the output object.
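
The Monte Carlo logic behind this histogram can be sketched in base R. Note that this is a deliberately simplified version that re-evaluates one fixed window, whereas the real test re-maximizes over all windows in every simulation; all values below are invented:

```r
# Toy Monte Carlo null distribution for a scan-type statistic (illustrative only)
set.seed(42)
E <- rep(4.6, 5)                       # expected cases per region under H0
obs_stat <- 5.76                       # observed log-likelihood ratio (made up)
sim_stat <- replicate(999, {
  y     <- rpois(length(E), E)         # counts simulated under spatial homogeneity
  C     <- sum(y); c_in <- sum(y[1:3]) # simplification: one fixed window re-used
  e_in  <- sum(E[1:3]); E_tot <- sum(E)
  if (c_in == 0 || c_in == C ||
      c_in / e_in <= (C - c_in) / (E_tot - e_in)) 0
  else c_in * log(c_in / e_in) + (C - c_in) * log((C - c_in) / (E_tot - e_in))
})
# Rank the observed statistic within the simulated null distribution
p_value <- (1 + sum(sim_stat >= obs_stat)) / (1 + length(sim_stat))
```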
 
-
 ```{r kd_outputs, eval = TRUE, echo = TRUE, nm = TRUE, fig.width=6, class.output="code-out", warning=FALSE, message=FALSE}
 names(kd_Wfever)
 
@@ -465,8 +458,3 @@ In this example, the expected number of cases was defined using the population c
 
 In addition, this cluster analysis was performed solely using the spatial scan, but you should keep in mind that this method of cluster detection can also be implemented for spatio-temporal data, where a cluster is defined as an abnormal number of cases in a delimited spatial area during a given period of time. The windows of observation are then defined by a center, a radius, and a time period. You should look at the `scan_eb_poisson()` function in the package `scanstatistics` [@scanstatistics] for this analysis.
 :::
-
-
-
-
-
diff --git a/img/dist_filter_1.png b/img/dist_filter_1.png
index e11d04497a5dd79cb75c5f49140e6855f14397a7..72efd1a6323904b0bad518a240ff8d58315d1ac5 100644
Binary files a/img/dist_filter_1.png and b/img/dist_filter_1.png differ
diff --git a/img/dist_filter_2.png b/img/dist_filter_2.png
index a31388d9d9ce499fc265419d80913c158fa8e6fa..c79243c5c213571b4308fd8bafd0f9dcc9cadfbd 100644
Binary files a/img/dist_filter_2.png and b/img/dist_filter_2.png differ
diff --git a/public/07-basic_statistics.html b/public/07-basic_statistics.html
index 0716ef86fb4b8aa24e348a6260e729f695992834..326a724407fc8e218595bc6d59018262b3dbbf89 100644
--- a/public/07-basic_statistics.html
+++ b/public/07-basic_statistics.html
@@ -464,21 +464,23 @@ Moran’s I test
 </div>
 <p>We will compute the Moran’s statistics using the <code>spdep</code><span class="citation" data-cites="spdep">(<a href="references.html#ref-spdep" role="doc-biblioref">R. Bivand et al. 2015</a>)</span> and <code>DCluster</code><span class="citation" data-cites="DCluster">(<a href="references.html#ref-DCluster" role="doc-biblioref">Gómez-Rubio et al. 2015</a>)</span> packages. The <code>spdep</code> package provides a collection of functions to analyze spatial correlations between polygons and works with sp objects. In this example, we use <code>poly2nb()</code> and <code>nb2listw()</code>. These functions respectively detect the neighboring polygons and assign weights corresponding to <span class="math inline">\(1/\#\ of\ neighbors\)</span>. The <code>DCluster</code> package provides a set of functions for the detection of spatial clusters of disease using count data.</p>
 <div class="cell" data-nm="true">
-<div class="sourceCode cell-code" id="cb9"><pre class="sourceCode r code-with-copy"><code class="sourceCode r"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(spdep) <span class="co"># Functions for creating spatial weight, spatial analysis</span></span>
-<span id="cb9-2"><a href="#cb9-2" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(DCluster)  <span class="co"># Package with functions for spatial cluster analysis</span></span>
-<span id="cb9-3"><a href="#cb9-3" aria-hidden="true" tabindex="-1"></a></span>
-<span id="cb9-4"><a href="#cb9-4" aria-hidden="true" tabindex="-1"></a>queen_nb <span class="ot">&lt;-</span> <span class="fu">poly2nb</span>(district) <span class="co"># Neighbors according to queen case</span></span>
-<span id="cb9-5"><a href="#cb9-5" aria-hidden="true" tabindex="-1"></a>q_listw <span class="ot">&lt;-</span> <span class="fu">nb2listw</span>(queen_nb, <span class="at">style =</span> <span class="st">'W'</span>) <span class="co"># row-standardized weights</span></span>
-<span id="cb9-6"><a href="#cb9-6" aria-hidden="true" tabindex="-1"></a></span>
-<span id="cb9-7"><a href="#cb9-7" aria-hidden="true" tabindex="-1"></a><span class="co"># Moran's I test</span></span>
-<span id="cb9-8"><a href="#cb9-8" aria-hidden="true" tabindex="-1"></a>m_test <span class="ot">&lt;-</span> <span class="fu">moranI.test</span>(cases <span class="sc">~</span> <span class="fu">offset</span>(<span class="fu">log</span>(expected)), </span>
-<span id="cb9-9"><a href="#cb9-9" aria-hidden="true" tabindex="-1"></a>                  <span class="at">data =</span> district,</span>
-<span id="cb9-10"><a href="#cb9-10" aria-hidden="true" tabindex="-1"></a>                  <span class="at">model =</span> <span class="st">'poisson'</span>,</span>
-<span id="cb9-11"><a href="#cb9-11" aria-hidden="true" tabindex="-1"></a>                  <span class="at">R =</span> <span class="dv">499</span>,</span>
-<span id="cb9-12"><a href="#cb9-12" aria-hidden="true" tabindex="-1"></a>                  <span class="at">listw =</span> q_listw,</span>
-<span id="cb9-13"><a href="#cb9-13" aria-hidden="true" tabindex="-1"></a>                  <span class="at">n =</span> <span class="fu">length</span>(district<span class="sc">$</span>cases), <span class="co"># number of regions</span></span>
-<span id="cb9-14"><a href="#cb9-14" aria-hidden="true" tabindex="-1"></a>                  <span class="at">S0 =</span> <span class="fu">Szero</span>(q_listw)) <span class="co"># Global sum of weights</span></span>
-<span id="cb9-15"><a href="#cb9-15" aria-hidden="true" tabindex="-1"></a><span class="fu">print</span>(m_test)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
+<div class="sourceCode cell-code" id="cb9"><pre class="sourceCode r code-with-copy"><code class="sourceCode r"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a><span class="co">#install.packages("spdep")</span></span>
+<span id="cb9-2"><a href="#cb9-2" aria-hidden="true" tabindex="-1"></a><span class="co">#install.packages("DCluster")</span></span>
+<span id="cb9-3"><a href="#cb9-3" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(spdep) <span class="co"># Functions for creating spatial weight, spatial analysis</span></span>
+<span id="cb9-4"><a href="#cb9-4" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(DCluster)  <span class="co"># Package with functions for spatial cluster analysis</span></span>
+<span id="cb9-5"><a href="#cb9-5" aria-hidden="true" tabindex="-1"></a></span>
+<span id="cb9-6"><a href="#cb9-6" aria-hidden="true" tabindex="-1"></a>queen_nb <span class="ot">&lt;-</span> <span class="fu">poly2nb</span>(district) <span class="co"># Neighbors according to queen case</span></span>
+<span id="cb9-7"><a href="#cb9-7" aria-hidden="true" tabindex="-1"></a>q_listw <span class="ot">&lt;-</span> <span class="fu">nb2listw</span>(queen_nb, <span class="at">style =</span> <span class="st">'W'</span>) <span class="co"># row-standardized weights</span></span>
+<span id="cb9-8"><a href="#cb9-8" aria-hidden="true" tabindex="-1"></a></span>
+<span id="cb9-9"><a href="#cb9-9" aria-hidden="true" tabindex="-1"></a><span class="co"># Moran's I test</span></span>
+<span id="cb9-10"><a href="#cb9-10" aria-hidden="true" tabindex="-1"></a>m_test <span class="ot">&lt;-</span> <span class="fu">moranI.test</span>(cases <span class="sc">~</span> <span class="fu">offset</span>(<span class="fu">log</span>(expected)), </span>
+<span id="cb9-11"><a href="#cb9-11" aria-hidden="true" tabindex="-1"></a>                  <span class="at">data =</span> district,</span>
+<span id="cb9-12"><a href="#cb9-12" aria-hidden="true" tabindex="-1"></a>                  <span class="at">model =</span> <span class="st">'poisson'</span>,</span>
+<span id="cb9-13"><a href="#cb9-13" aria-hidden="true" tabindex="-1"></a>                  <span class="at">R =</span> <span class="dv">499</span>,</span>
+<span id="cb9-14"><a href="#cb9-14" aria-hidden="true" tabindex="-1"></a>                  <span class="at">listw =</span> q_listw,</span>
+<span id="cb9-15"><a href="#cb9-15" aria-hidden="true" tabindex="-1"></a>                  <span class="at">n =</span> <span class="fu">length</span>(district<span class="sc">$</span>cases), <span class="co"># number of regions</span></span>
+<span id="cb9-16"><a href="#cb9-16" aria-hidden="true" tabindex="-1"></a>                  <span class="at">S0 =</span> <span class="fu">Szero</span>(q_listw)) <span class="co"># Global sum of weights</span></span>
+<span id="cb9-17"><a href="#cb9-17" aria-hidden="true" tabindex="-1"></a><span class="fu">print</span>(m_test)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
 <div class="cell-output cell-output-stdout">
 <pre class="code-out"><code>Moran's I test of spatial autocorrelation 
 
@@ -486,14 +488,14 @@ Moran’s I test
     Model used when sampling: Poisson 
     Number of simulations: 499 
     Statistic:  0.1566449 
-    p-value :  0.014 </code></pre>
+    p-value :  0.008 </code></pre>
 </div>
 <div class="sourceCode cell-code" id="cb11"><pre class="sourceCode r code-with-copy"><code class="sourceCode r"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a><span class="fu">plot</span>(m_test)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
 <div class="cell-output-display">
 <p><img src="07-basic_statistics_files/figure-html/MoransI-1.png" class="img-fluid" width="768"></p>
 </div>
 </div>
-<p>The Moran’s statistics is here <span class="math inline">\(I =\)</span> 0.16. When comparing its value to the H0 distribution (built under 499 simulations), the probability of observing such a I value under the null hypothesis, i.e.&nbsp;the distribution of cases is spatially independent, is <span class="math inline">\(p_{value} =\)</span> 0.014. We therefore reject H0 with error risk of <span class="math inline">\(\alpha = 5\%\)</span>. The distribution of cases is therefore autocorrelated across districts in Cambodia.</p>
+<p>The Moran’s statistics is here <span class="math inline">\(I =\)</span> 0.16. When comparing its value to the H0 distribution (built under 499 simulations), the probability of observing such a I value under the null hypothesis, i.e.&nbsp;the distribution of cases is spatially independent, is <span class="math inline">\(p_{value} =\)</span> 0.008. We therefore reject H0 with error risk of <span class="math inline">\(\alpha = 5\%\)</span>. The distribution of cases is therefore autocorrelated across districts in Cambodia.</p>
 </section>
 <section id="the-local-morans-i-lisa-test" class="level4" data-number="6.2.2.2">
 <h4 data-number="6.2.2.2" class="anchored" data-anchor-id="the-local-morans-i-lisa-test"><span class="header-section-number">6.2.2.2</span> The Local Moran’s I LISA test</h4>
@@ -629,7 +631,8 @@ Kulldorf test
 <p>While we test the significance of a large number of observation windows, one can raise concerns about multiple testing and Type I error. This approach, however, assumes that we are not interested in a set of significant clusters but only in the single most-likely cluster. This <strong>a priori</strong> restriction eliminates concerns about multiple comparisons since the test reduces to assessing the statistical significance of one single most-likely cluster.</p>
 <p>Because we tested all possible locations and window radii, we can also choose to look at secondary clusters. In this case, you should keep in mind that increasing the number of secondary clusters you select increases the risk of Type I error.</p>
 <div class="cell" data-nm="true">
-<div class="sourceCode cell-code" id="cb14"><pre class="sourceCode r code-with-copy"><code class="sourceCode r"><span id="cb14-1"><a href="#cb14-1" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(<span class="st">"SpatialEpi"</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
+<div class="sourceCode cell-code" id="cb14"><pre class="sourceCode r code-with-copy"><code class="sourceCode r"><span id="cb14-1"><a href="#cb14-1" aria-hidden="true" tabindex="-1"></a><span class="co">#install.packages("SpatialEpi")</span></span>
+<span id="cb14-2"><a href="#cb14-2" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(<span class="st">"SpatialEpi"</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
 </div>
 <p>The use of R spatial objects is not implemented in the <code>kulldorff()</code> function. It instead uses a matrix of xy coordinates representing the centroids of the districts. A given district is included in the observed circular window if its centroid falls into the circle.</p>
 <div class="cell" data-nm="true">
@@ -707,7 +710,7 @@ Kulldorf test
 <span id="cb30-7"><a href="#cb30-7" aria-hidden="true" tabindex="-1"></a><span class="fu">print</span>(df_secondary_clusters)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
 <div class="cell-output cell-output-stdout">
 <pre class="code-out"><code>       SMR number.of.cases expected.cases p.value
-1 3.767698              16       4.246625   0.014</code></pre>
+1 3.767698              16       4.246625   0.008</code></pre>
 </div>
 </div>
 <p>We only have one secondary cluster composed of one district.</p>
diff --git a/public/07-basic_statistics_files/figure-html/LocalMoransI_plt-1.png b/public/07-basic_statistics_files/figure-html/LocalMoransI_plt-1.png
index bcb22f6f4b9785c517f18d28dc13daaffd6be770..4654d2bf6ba6f175849759bf47f7103805d12c06 100644
Binary files a/public/07-basic_statistics_files/figure-html/LocalMoransI_plt-1.png and b/public/07-basic_statistics_files/figure-html/LocalMoransI_plt-1.png differ
diff --git a/public/07-basic_statistics_files/figure-html/MoransI-1.png b/public/07-basic_statistics_files/figure-html/MoransI-1.png
index 768b926e0decce30295066ae3409d20fe673fd72..990ce7f362312ea273f93f8f61522286368843b9 100644
Binary files a/public/07-basic_statistics_files/figure-html/MoransI-1.png and b/public/07-basic_statistics_files/figure-html/MoransI-1.png differ
diff --git a/public/07-basic_statistics_files/figure-html/kd_test-1.png b/public/07-basic_statistics_files/figure-html/kd_test-1.png
index e5054c2623ae5d41a96fb736da8f32296819db7d..98541413f2326e07fa4c167ae9ab14a5b05267ee 100644
Binary files a/public/07-basic_statistics_files/figure-html/kd_test-1.png and b/public/07-basic_statistics_files/figure-html/kd_test-1.png differ
diff --git a/public/search.json b/public/search.json
index 6ce4f115b6d5ed20eef33447b0482d8a73f78cec..e74a1fceb955fd4b15809ec3b40feac6d38020dc 100644
--- a/public/search.json
+++ b/public/search.json
@@ -18,7 +18,7 @@
     "href": "07-basic_statistics.html#cluster-analysis",
     "title": "6  Basic statistics for spatial analysis",
     "section": "6.2 Cluster analysis",
-    "text": "6.2 Cluster analysis\n\n6.2.1 General introduction\nWhy studying clusters in epidemiology? Cluster analysis help identifying unusual patterns that occurs during a given period of time. The underlying ultimate goal of such analysis is to explain the observation of such patterns. In epidemiology, we can distinguish two types of process that would explain heterogeneity in case distribution:\n\nThe 1st order effects are the spatial variations of cases distribution caused by underlying properties of environment or the population structure itself. In such process individual get infected independently from the rest of the population. Such process includes the infection through an environment at risk as, for example, air pollution, contaminated waters or soils and UV exposition. This effect assume that the observed pattern is caused by a difference in risk intensity.\nThe 2nd order effects describes process of spread, contagion and diffusion of diseases caused by interactions between individuals. This includes transmission of infectious disease by proximity, but also the transmission of non-infectious disease, for example, with the diffusion of social norms within networks. This effect assume that the observed pattern is caused by correlations or co-variations.\n\nNo statistical methods could distinguish between these competing processes since their outcome results in similar pattern of points. The cluster analysis help describing the magnitude and the location of pattern but in no way could answer the question of why such patterns occurs. It is therefore a step that help detecting cluster for description and surveillance purpose and rising hypothesis on the underlying process that will lead further investigations.\nKnowledge about the disease and its transmission process could orientate the choice of the methods of study. 
We presented in this brief tutorial two methods of cluster detection, the Moran’s I test that test for spatial independence (likely related to 2nd order effects) and the scan statistics that test for homogeneous distribution (likely related 1st order effects). It relies on epidemiologist to select the tools that best serve the studied question.\n\n\n\n\n\n\nStatistic tests and distributions\n\n\n\nIn statistics, problems are usually expressed by defining two hypotheses: the null hypothesis (H0), i.e., an a priori hypothesis of the studied phenomenon (e.g., the situation is a random) and the alternative hypothesis (H1), e.g., the situation is not random. The main principle is to measure how likely the observed situation belong to the ensemble of situation that are possible under the H0 hypothesis.\nIn mathematics, a probability distribution is a mathematical expression that represents what we would expect due to random chance. The choice of the probability distribution relies on the type of data you use (continuous, count, binary). In general, three distribution a used while studying disease rates, the Binomial, the Poisson and the Poisson-gamma mixture (also known as negative binomial) distributions.\nMany the statistical tests assume by default that data are normally distributed. It implies that your variable is continuous and that all data could easily be represented by two parameters, the mean and the variance, i.e., each value have the same level of certainty. 
If many measure can be assessed under the normality assumption, this is usually not the case in epidemiology with strictly positives rates and count values that 1) does not fit the normal distribution and 2) does not provide with the same degree of certainty since variances likely differ between district due to different population size, i.e., some district have very sparse data (with high variance) while other have adequate data (with lower variance).\n\n# dataset statistics\nm_cases <- mean(district$incidence)\nsd_cases <- sd(district$incidence)\n\nhist(district$incidence, probability = TRUE, ylim = c(0, 0.4), xlim = c(-5, 16), xlab = \"Number of cases\", ylab = \"Probability\", main = \"Histogram of observed incidence compared\\nto Normal and Poisson distributions\")\ncurve(dnorm(x, m_cases, sd_cases),col = \"blue\",  lwd = 1, add = TRUE)\npoints(0:max(district$incidence), dpois(0:max(district$incidence), m_cases),type = 'b', pch = 20, col = \"red\", ylim = c(0, 0.6), lty = 2)\n\nlegend(\"topright\", legend = c(\"Normal distribution\", \"Poisson distribution\", \"Observed distribution\"), col = c(\"blue\", \"red\", \"black\"),pch = c(NA, 20, NA), lty = c(1, 2, 1))\n\n\n\n\nIn this tutorial, we used the Poisson distribution in our statistical tests.\n\n\n\n\n6.2.2 Test for spatial autocorrelation (Moran’s I test)\n\n6.2.2.1 The global Moran’s I test\nA popular test for spatial autocorrelation is the Moran’s test. This test tells us whether nearby units tend to exhibit similar incidences. It ranges from -1 to +1. 
A value of -1 denote that units with low rates are located near other units with high rates, while a Moran’s I value of +1 indicates a concentration of spatial units exhibiting similar rates.\n\n\n\n\n\n\nMoran’s I test\n\n\n\nThe Moran’s statistics is:\n\\[I = \\frac{N}{\\sum_{i=1}^N\\sum_{j=1}^Nw_{ij}}\\frac{\\sum_{i=1}^N\\sum_{j=1}^Nw_{ij}(Y_i-\\bar{Y})(Y_j - \\bar{Y})}{\\sum_{i=1}^N(Y_i-\\bar{Y})^2}\\] with:\n\n\\(N\\): the number of polygons,\n\\(w_{ij}\\): is a matrix of spatial weight with zeroes on the diagonal (i.e., \\(w_{ii}=0\\)). For example, if polygons are neighbors, the weight takes the value \\(1\\) otherwise it takes the value \\(0\\).\n\\(Y_i\\): the variable of interest,\n\\(\\bar{Y}\\): the mean value of \\(Y\\).\n\nUnder the Moran’s test, the statistics hypotheses are:\n\nH0: the distribution of cases is spatially independent, i.e., \\(I=0\\).\nH1: the distribution of cases is spatially autocorrelated, i.e., \\(I\\ne0\\).\n\n\n\nWe will compute the Moran’s statistics using spdep(R. Bivand et al. 2015) and Dcluster(Gómez-Rubio et al. 2015) packages. spdep package provides a collection of functions to analyze spatial correlations of polygons and works with sp objects. In this example, we use poly2nb() and nb2listw(). These functions respectively detect the neighboring polygons and assign weight corresponding to \\(1/\\#\\ of\\ neighbors\\). 
Dcluster package provides a set of functions for the detection of spatial clusters of disease using count data.\n\nlibrary(spdep) # Functions for creating spatial weight, spatial analysis\nlibrary(DCluster)  # Package with functions for spatial cluster analysis\n\nqueen_nb <- poly2nb(district) # Neighbors according to queen case\nq_listw <- nb2listw(queen_nb, style = 'W') # row-standardized weights\n\n# Moran's I test\nm_test <- moranI.test(cases ~ offset(log(expected)), \n                  data = district,\n                  model = 'poisson',\n                  R = 499,\n                  listw = q_listw,\n                  n = length(district$cases), # number of regions\n                  S0 = Szero(q_listw)) # Global sum of weights\nprint(m_test)\n\nMoran's I test of spatial autocorrelation \n\n    Type of boots.: parametric \n    Model used when sampling: Poisson \n    Number of simulations: 499 \n    Statistic:  0.1566449 \n    p-value :  0.014 \n\nplot(m_test)\n\n\n\n\nThe Moran’s statistics is here \\(I =\\) 0.16. When comparing its value to the H0 distribution (built under 499 simulations), the probability of observing such a I value under the null hypothesis, i.e. the distribution of cases is spatially independent, is \\(p_{value} =\\) 0.014. We therefore reject H0 with error risk of \\(\\alpha = 5\\%\\). The distribution of cases is therefore autocorrelated across districts in Cambodia.\n\n\n6.2.2.2 The Local Moran’s I LISA test\nThe global Moran’s test provides us a global statistical value informing whether autocorrelation occurs over the territory but does not inform on where does these correlations occurs, i.e., what is the locations of the clusters. To identify such cluster, we can decompose the Moran’s I statistic to extract local information of the level of correlation of each district and its neighbors. This is called the Local Moran’s I LISA statistic. 
Because the Local Moran’s I LISA statistic tests each district for autocorrelation independently, concern is raised about multiple testing limitations that increase the Type I error (\\(\\alpha\\)) of the statistical tests. Local tests should therefore be used to explore and describe clusters once the global test has detected autocorrelation.\n\n\n\n\n\n\nStatistical test\n\n\n\nFor each district \\(i\\), the Local Moran’s I statistic is:\n\\[I_i = \\frac{(Y_i-\\bar{Y})}{\\frac{1}{N}\\sum_{k=1}^N(Y_k-\\bar{Y})^2}\\sum_{j=1}^Nw_{ij}(Y_j - \\bar{Y}) \\text{ with }  I = \\sum_{i=1}^NI_i/N\\]\n\n\nThe localmoran() function from the package spdep treats the variable of interest as if it were normally distributed. In some cases, this assumption could be reasonable for incidence rates, especially when the areal units of analysis have sufficiently large population counts, suggesting that the values have similar levels of variance. Unfortunately, the local Moran’s test has not been implemented for the Poisson distribution (population not large enough in some districts) in the spdep package. However, Bivand et al. (R. S. Bivand et al. 
2008) provided some code to manually perform the analysis using the Poisson distribution, and this code was further implemented in the course “Spatial Epidemiology”.\n\n# Step 1 - Create the standardized deviation of observed from expected\nsd_lm <- (district$cases - district$expected) / sqrt(district$expected)\n\n# Step 2 - Create a spatially lagged version of standardized deviation of neighbors\nwsd_lm <- lag.listw(q_listw, sd_lm)\n\n# Step 3 - the local Moran's I is the product of step 1 and step 2\ndistrict$I_lm <- sd_lm * wsd_lm\n\n# Step 4 - setup parameters for simulation of the null distribution\n\n# Specify number of simulations to run\nnsim <- 499\n\n# Specify dimensions of result based on number of regions\nN <- length(district$expected)\n\n# Create a matrix of zeros to hold results, with a row for each district, and a column for each simulation\nsims <- matrix(0, ncol = nsim, nrow = N)\n\n# Step 5 - Start a for-loop to iterate over simulation columns\nfor(i in 1:nsim){\n  y <- rpois(N, lambda = district$expected) # generate a random event count, given expected\n  sd_lmi <- (y - district$expected) / sqrt(district$expected) # standardized local measure\n  wsd_lmi <- lag.listw(q_listw, sd_lmi) # standardized spatially lagged measure\n  sims[, i] <- sd_lmi * wsd_lmi # this is the I(i) statistic under this iteration of null\n}\n\n# Step 6 - For each district, test where the observed value ranks with respect to the null simulations\nxrank <- apply(cbind(district$I_lm, sims), 1, function(x) rank(x)[1])\n\n# Step 7 - Calculate the difference between observed rank and total possible (nsim)\ndiff <- nsim - xrank\ndiff <- ifelse(diff > 0, diff, 0)\n\n# Step 8 - Assuming a uniform distribution of ranks, calculate p-value for observed\n# given the null distribution generated from simulations\ndistrict$pval_lm <- punif((diff + 1) / (nsim + 1))\n\nBriefly, the process consists of 1) computing the I statistic for the observed data, 2) estimating the null distribution of the I 
statistic by performing random sampling from a Poisson distribution and 3) comparing the observed I statistic with the null distribution to determine the probability of observing such a value if the number of cases were spatially independent. For each district, we obtain a p-value based on the comparison of the observed value and the null distribution.\nA conventional way of plotting these results is to classify the districts into 5 classes based on the local Moran’s I output. The classification of clusters that are significantly autocorrelated with their neighbors is based on a comparison of the scaled incidence of the district with the scaled weighted average incidence of its neighboring districts (computed with lag.listw()):\n\nDistricts that have higher-than-average rates in both index regions and their neighbors and showing statistically significant positive values for the local \\(I_i\\) statistic are defined as High-High (hotspot of the disease).\nDistricts that have lower-than-average rates in both index regions and their neighbors and showing statistically significant positive values for the local \\(I_i\\) statistic are defined as Low-Low (cold spot of the disease).\nDistricts that have higher-than-average rates in the index regions and lower-than-average rates in their neighbors, and showing statistically significant negative values for the local \\(I_i\\) statistic are defined as High-Low (outlier with high incidence in an area with low incidence).\nDistricts that have lower-than-average rates in the index regions and higher-than-average rates in their neighbors, and showing statistically significant negative values for the local \\(I_i\\) statistic are defined as Low-High (outlier of low incidence in an area with high incidence).\nDistricts with non-significant values for the \\(I_i\\) statistic are defined as Non-significant.\n\n\n# create lagged local raw_rate - in other words the average of the queen neighbors' values\n# values are scaled (centered 
and reduced) to be compared to the average\ndistrict$lag_std   <- scale(lag.listw(q_listw, var = district$incidence))\ndistrict$incidence_std <- scale(district$incidence)\n\n# extract pvalues\n# district$lm_pv <- lm_test[,5]\n\n# Classify local moran's outputs\ndistrict$lm_class <- NA\ndistrict$lm_class[district$incidence_std >=0 & district$lag_std >=0] <- 'High-High'\ndistrict$lm_class[district$incidence_std <=0 & district$lag_std <=0] <- 'Low-Low'\ndistrict$lm_class[district$incidence_std <=0 & district$lag_std >=0] <- 'Low-High'\ndistrict$lm_class[district$incidence_std >=0 & district$lag_std <=0] <- 'High-Low'\ndistrict$lm_class[district$pval_lm >= 0.05] <- 'Non-significant'\n\ndistrict$lm_class <- factor(district$lm_class, levels=c(\"High-High\", \"Low-Low\", \"High-Low\",  \"Low-High\", \"Non-significant\") )\n\n# create map\nmf_map(x = district,\n       var = \"lm_class\",\n       type = \"typo\",\n       cex = 2,\n       col_na = \"white\",\n       #val_order = c(\"High-High\", \"Low-Low\", \"High-Low\",  \"Low-High\", \"Non-significant\") ,\n       pal = c(\"#6D0026\" , \"blue\",  \"white\") , # \"#FF755F\",\"#7FABD3\" ,\n       leg_title = \"Clusters\")\n\nmf_layout(title = \"Cluster using Local Moran's I statistic\")\n\n\n\n\n\n\n\n6.2.3 Spatial scan statistics\nWhile Moran’s indices focus on testing for autocorrelation between neighboring polygons (under the null assumption of spatial independence), the spatial scan statistic aims at identifying an abnormally high risk in a given region compared to the risk outside of this region (under the null assumption of homogeneous distribution). 
The conception of a cluster is therefore different between the two methods.\nThe function kulldorff from the package SpatialEpi (Kim and Wakefield 2010) is a simple tool to implement spatial-only scan statistics.\n\n\n\n\n\n\nKulldorff test\n\n\n\nUnder the kulldorff test, the statistical hypotheses are:\n\nH0: the risk is constant over the area, i.e., there is spatial homogeneity of the incidence.\nH1: a particular window has a higher incidence than the rest of the area, i.e., there is spatial heterogeneity of the incidence.\n\n\n\nBriefly, the kulldorff scan statistic scans the area for clusters in several steps:\n\nIt creates a circular window of observation by defining a single location and an associated radius varying from 0 to a large number that depends on the population distribution (the largest radius could include 50% of the population).\nIt aggregates the count of events and the population at risk (or an expected count of events) inside and outside the window of observation.\nFinally, it computes the likelihood ratio and tests whether the risk is equal inside versus outside the window (H0) or greater inside the observed window (H1). The H0 distribution is estimated by simulating the distribution of counts under the null hypothesis (homogeneous risk).\nThese 3 steps are repeated for each location and each possible window radius.\n\nSince we test the significance of a large number of observation windows, one could raise concerns about multiple testing and Type I error. This approach, however, assumes that we are not interested in a set of significant clusters but only in a most likely cluster. This a priori restriction eliminates the concern for multiple comparisons, since the test reduces to the statistical significance of one single most likely cluster.\nBecause we tested all possible locations and window radii, we can also choose to look at secondary clusters. 
In this case, you should keep in mind that increasing the number of secondary clusters you select increases the risk of Type I error.\n\nlibrary(\"SpatialEpi\")\n\nR spatial objects are not supported by the kulldorff() function. It instead uses a matrix of xy coordinates representing the centroids of the districts. A given district is included in the observed circular window if its centroid falls within the circle.\n\ndistrict_xy <- st_centroid(district) %>% \n  st_coordinates()\n\nhead(district_xy)\n\n         X       Y\n1 330823.3 1464560\n2 749758.3 1541787\n3 468384.0 1277007\n4 494548.2 1215261\n5 459644.2 1194615\n6 360528.3 1516339\n\n\nWe can then call the kulldorff function (you are strongly encouraged to read ?kulldorff to call the function properly). The alpha.level threshold filters the secondary clusters that will be retained. The most likely cluster is saved regardless of its significance.\n\nkd_Wfever <- kulldorff(district_xy, \n                cases = district$cases,\n                population = district$T_POP,\n                expected.cases = district$expected,\n                pop.upper.bound = 0.5, # include maximum 50% of the population in a window\n                n.simulations = 499,\n                alpha.level = 0.2)\n\n\n\n\nThe function plots the histogram of the log-likelihood ratios simulated under the null hypothesis, estimated from Monte Carlo simulations. The observed value of the most significant cluster identified from all possible scans is compared to this distribution to determine significance. All outputs are saved into an R object, here called kd_Wfever. 
Unfortunately, the package does not provide summary or visualization functions for the results, but we can explore the output object.\n\nnames(kd_Wfever)\n\n[1] \"most.likely.cluster\" \"secondary.clusters\"  \"type\"               \n[4] \"log.lkhd\"            \"simulated.log.lkhd\" \n\n\nFirst, we can focus on the most likely cluster and explore its characteristics.\n\n# We can see which districts (row number) belong to this cluster\nkd_Wfever$most.likely.cluster$location.IDs.included\n\n [1]  48  93  66 180 133  29 194 118  50 144  31 141   3 117  22  43 142\n\n# standardized incidence ratio\nkd_Wfever$most.likely.cluster$SMR\n\n[1] 2.303106\n\n# number of observed and expected cases in this cluster\nkd_Wfever$most.likely.cluster$number.of.cases\n\n[1] 122\n\nkd_Wfever$most.likely.cluster$expected.cases\n\n[1] 52.97195\n\n\n17 districts belong to the cluster, and its number of cases is 2.3 times the expected number of cases.\nSimilarly, we can study the secondary clusters. Results are saved in a list.\n\n# We can see which districts (row number) belong to this cluster\nlength(kd_Wfever$secondary.clusters)\n\n[1] 1\n\n# retrieve data for all secondary clusters into a table\ndf_secondary_clusters <- data.frame(SMR = sapply(kd_Wfever$secondary.clusters, '[[', 5),  \n                          number.of.cases = sapply(kd_Wfever$secondary.clusters, '[[', 3),\n                          expected.cases = sapply(kd_Wfever$secondary.clusters, '[[', 4),\n                          p.value = sapply(kd_Wfever$secondary.clusters, '[[', 8))\n\nprint(df_secondary_clusters)\n\n       SMR number.of.cases expected.cases p.value\n1 3.767698              16       4.246625   0.014\n\n\nWe only have one secondary cluster, composed of one district.\n\n# create empty column to store cluster information\ndistrict$k_cluster <- NA\n\n# save cluster information from kulldorff outputs\ndistrict$k_cluster[kd_Wfever$most.likely.cluster$location.IDs.included] <- 'Most likely cluster'\n\nfor(i in 
1:length(kd_Wfever$secondary.clusters)){\ndistrict$k_cluster[kd_Wfever$secondary.clusters[[i]]$location.IDs.included] <- paste(\n  'Secondary cluster', i, sep = '')\n}\n\n#district$k_cluster[is.na(district$k_cluster)] <- \"No cluster\"\n\n\n# create map\nmf_map(x = district,\n       var = \"k_cluster\",\n       type = \"typo\",\n       cex = 2,\n       col_na = \"white\",\n       pal = mf_get_pal(palette = \"Reds\", n = 3)[1:2],\n       leg_title = \"Clusters\")\n\nmf_layout(title = \"Cluster using kulldorff scan statistic\")\n\n\n\n\n\n\n\n\n\n\nTo go further …\n\n\n\nIn this example, the expected number of cases was defined using the population count, but note that standardization over other variables such as age could also be implemented with the strata parameter in the kulldorff() function.\nIn addition, this cluster analysis was performed solely using the spatial scan, but you should keep in mind that this method of cluster detection can be implemented for spatio-temporal data as well, where a cluster is defined as an abnormal number of cases in a delimited spatial area during a given period of time. The windows of observation are then defined by a center, a radius and a time period. You should look at the scan_eb_poisson() function in the scanstatistics package (Allévius 2018) for this analysis.\n\n\n\n\n\n\nAllévius, Benjamin. 2018. “Scanstatistics: Space-Time Anomaly Detection Using Scan Statistics.” Journal of Open Source Software 3 (25): 515.\n\n\nBivand, Roger S, Edzer J Pebesma, and Virgilio Gómez-Rubio. 2008. Applied Spatial Data Analysis with R. Springer.\n\n\nBivand, Roger, Micah Altman, Luc Anselin, Renato Assunção, Olaf Berke, Andrew Bernat, and Guillaume Blanchet. 2015. “Package ‘Spdep’.” The Comprehensive R Archive Network.\n\n\nGómez-Rubio, Virgilio, Juan Ferrándiz-Ferragud, Antonio López-Quı́lez, et al. 2015. “Package ‘DCluster’.”\n\n\nKim, Albert Y, and Jon Wakefield. 2010. 
“R Data and Methods for Spatial Epidemiology: The SpatialEpi Package.” Dept of Statistics, University of Washington."
+    "text": "6.2 Cluster analysis\n\n6.2.1 General introduction\nWhy studying clusters in epidemiology? Cluster analysis help identifying unusual patterns that occurs during a given period of time. The underlying ultimate goal of such analysis is to explain the observation of such patterns. In epidemiology, we can distinguish two types of process that would explain heterogeneity in case distribution:\n\nThe 1st order effects are the spatial variations of cases distribution caused by underlying properties of environment or the population structure itself. In such process individual get infected independently from the rest of the population. Such process includes the infection through an environment at risk as, for example, air pollution, contaminated waters or soils and UV exposition. This effect assume that the observed pattern is caused by a difference in risk intensity.\nThe 2nd order effects describes process of spread, contagion and diffusion of diseases caused by interactions between individuals. This includes transmission of infectious disease by proximity, but also the transmission of non-infectious disease, for example, with the diffusion of social norms within networks. This effect assume that the observed pattern is caused by correlations or co-variations.\n\nNo statistical methods could distinguish between these competing processes since their outcome results in similar pattern of points. The cluster analysis help describing the magnitude and the location of pattern but in no way could answer the question of why such patterns occurs. It is therefore a step that help detecting cluster for description and surveillance purpose and rising hypothesis on the underlying process that will lead further investigations.\nKnowledge about the disease and its transmission process could orientate the choice of the methods of study. 
We present in this brief tutorial two methods of cluster detection: the Moran’s I test, which tests for spatial independence (likely related to 2nd order effects), and the scan statistics, which test for homogeneous distribution (likely related to 1st order effects). It is up to the epidemiologist to select the tools that best serve the studied question.\n\n\n\n\n\n\nStatistical tests and distributions\n\n\n\nIn statistics, problems are usually expressed by defining two hypotheses: the null hypothesis (H0), i.e., an a priori hypothesis of the studied phenomenon (e.g., the situation is random), and the alternative hypothesis (H1), e.g., the situation is not random. The main principle is to measure how likely the observed situation belongs to the ensemble of situations that are possible under the H0 hypothesis.\nIn mathematics, a probability distribution is a mathematical expression that represents what we would expect due to random chance. The choice of the probability distribution relies on the type of data you use (continuous, count, binary). In general, three distributions are used when studying disease rates: the Binomial, the Poisson and the Poisson-gamma mixture (also known as negative binomial) distributions.\nMany statistical tests assume by default that data are normally distributed. This implies that your variable is continuous and that the data can easily be represented by two parameters, the mean and the variance, i.e., each value has the same level of certainty. 
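To make the mean-variance coupling of count data concrete, here is a minimal sketch (in Python rather than the tutorial's R, purely as a language-agnostic illustration; the mean of 2.3 is an arbitrary toy value, not taken from the dataset) showing that a Poisson variable's variance equals its mean, so areas with different expected counts cannot share one common variance:

```python
import math

# Poisson probability mass function, computed from its definition
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 2.3  # a small toy mean, as in a sparsely populated district
pmf = [poisson_pmf(k, lam) for k in range(60)]  # tail beyond 60 is negligible

mu = sum(k * p for k, p in enumerate(pmf))
var = sum((k - mu) ** 2 * p for k, p in enumerate(pmf))
print(round(mu, 6), round(var, 6))  # both equal lam: variance is tied to the mean
```

A Normal fit, by contrast, treats the variance as a free parameter shared by all observations, which is exactly the assumption that fails for sparse count data.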
While many measures can be assessed under the normality assumption, this is usually not the case in epidemiology, with strictly positive rates and count values that 1) do not fit the normal distribution and 2) do not provide the same degree of certainty, since variances likely differ between districts due to different population sizes, i.e., some districts have very sparse data (with high variance) while others have adequate data (with lower variance).\n\n# dataset statistics\nm_cases <- mean(district$incidence)\nsd_cases <- sd(district$incidence)\n\nhist(district$incidence, probability = TRUE, ylim = c(0, 0.4), xlim = c(-5, 16), xlab = \"Incidence\", ylab = \"Probability\", main = \"Histogram of observed incidence compared\\nto Normal and Poisson distributions\")\ncurve(dnorm(x, m_cases, sd_cases),col = \"blue\",  lwd = 1, add = TRUE)\npoints(0:max(district$incidence), dpois(0:max(district$incidence), m_cases),type = 'b', pch = 20, col = \"red\", ylim = c(0, 0.6), lty = 2)\n\nlegend(\"topright\", legend = c(\"Normal distribution\", \"Poisson distribution\", \"Observed distribution\"), col = c(\"blue\", \"red\", \"black\"),pch = c(NA, 20, NA), lty = c(1, 2, 1))\n\n\n\n\nIn this tutorial, we use the Poisson distribution in our statistical tests.\n\n\n\n\n6.2.2 Test for spatial autocorrelation (Moran’s I test)\n\n6.2.2.1 The global Moran’s I test\nA popular test for spatial autocorrelation is the Moran’s test. This test tells us whether nearby units tend to exhibit similar incidences. It ranges from -1 to +1. 
A value of -1 denotes that units with low rates are located near other units with high rates, while a Moran’s I value of +1 indicates a concentration of spatial units exhibiting similar rates.\n\n\n\n\n\n\nMoran’s I test\n\n\n\nThe Moran’s statistic is:\n\\[I = \\frac{N}{\\sum_{i=1}^N\\sum_{j=1}^Nw_{ij}}\\frac{\\sum_{i=1}^N\\sum_{j=1}^Nw_{ij}(Y_i-\\bar{Y})(Y_j - \\bar{Y})}{\\sum_{i=1}^N(Y_i-\\bar{Y})^2}\\] with:\n\n\\(N\\): the number of polygons,\n\\(w_{ij}\\): a matrix of spatial weights with zeroes on the diagonal (i.e., \\(w_{ii}=0\\)). For example, if polygons are neighbors, the weight takes the value \\(1\\), otherwise it takes the value \\(0\\).\n\\(Y_i\\): the variable of interest,\n\\(\\bar{Y}\\): the mean value of \\(Y\\).\n\nUnder the Moran’s test, the statistical hypotheses are:\n\nH0: the distribution of cases is spatially independent, i.e., \\(I=0\\).\nH1: the distribution of cases is spatially autocorrelated, i.e., \\(I\\ne0\\).\n\n\n\nWe will compute the Moran’s statistic using the spdep (R. Bivand et al. 2015) and DCluster (Gómez-Rubio et al. 2015) packages. The spdep package provides a collection of functions to analyze spatial correlations of polygons and works with sp objects. In this example, we use poly2nb() and nb2listw(). These functions respectively detect the neighboring polygons and assign weights corresponding to \\(1/\\#\\ of\\ neighbors\\). 
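The global Moran's I formula above can be checked directly on a toy example. A minimal sketch (in Python rather than the tutorial's R, as a language-agnostic illustration; the 4-unit study area and its binary contiguity weights are hypothetical):

```python
# Global Moran's I computed term by term from the formula above:
# I = (N / S0) * sum_ij( w_ij * d_i * d_j ) / sum_i( d_i^2 ), d_i = Y_i - Ybar
def morans_i(y, w):
    n = len(y)
    ybar = sum(y) / n
    d = [yi - ybar for yi in y]
    s0 = sum(sum(row) for row in w)  # global sum of weights (Szero in spdep)
    num = sum(w[i][j] * d[i] * d[j] for i in range(n) for j in range(n))
    den = sum(di * di for di in d)
    return (n / s0) * (num / den)

# Hypothetical 4 units along a line: unit i is neighbor of i-1 and i+1
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
y = [1, 2, 3, 4]  # values increase along the line: similar values are adjacent
print(morans_i(y, w))  # positive, indicating positive spatial autocorrelation
```

With a smooth gradient like this, neighboring deviations share the same sign, so the cross-product term is positive and I is positive, matching the interpretation given above.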
The DCluster package provides a set of functions for the detection of spatial clusters of disease using count data.\n\n#install.packages(\"spdep\")\n#install.packages(\"DCluster\")\nlibrary(spdep) # Functions for creating spatial weight, spatial analysis\nlibrary(DCluster)  # Package with functions for spatial cluster analysis\n\nqueen_nb <- poly2nb(district) # Neighbors according to queen case\nq_listw <- nb2listw(queen_nb, style = 'W') # row-standardized weights\n\n# Moran's I test\nm_test <- moranI.test(cases ~ offset(log(expected)), \n                  data = district,\n                  model = 'poisson',\n                  R = 499,\n                  listw = q_listw,\n                  n = length(district$cases), # number of regions\n                  S0 = Szero(q_listw)) # Global sum of weights\nprint(m_test)\n\nMoran's I test of spatial autocorrelation \n\n    Type of boots.: parametric \n    Model used when sampling: Poisson \n    Number of simulations: 499 \n    Statistic:  0.1566449 \n    p-value :  0.008 \n\nplot(m_test)\n\n\n\n\nThe Moran’s statistic is here \\(I =\\) 0.16. When comparing its value to the H0 distribution (built from 499 simulations), the probability of observing such an I value under the null hypothesis, i.e. the distribution of cases is spatially independent, is \\(p_{value} =\\) 0.008. We therefore reject H0 with an error risk of \\(\\alpha = 5\\%\\). The distribution of cases is therefore autocorrelated across districts in Cambodia.\n\n\n6.2.2.2 The Local Moran’s I LISA test\nThe global Moran’s test provides a single statistical value informing whether autocorrelation occurs over the territory, but it does not inform on where these correlations occur, i.e., the locations of the clusters. To identify such clusters, we can decompose the Moran’s I statistic to extract local information on the level of correlation between each district and its neighbors. This is called the Local Moran’s I LISA statistic. 
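Before moving to the local version, the parametric bootstrap behind moranI.test() can be sketched in a few lines. This is a simplified illustration (in Python rather than the tutorial's R): counts are redrawn from a Poisson with the expected counts as means, I is recomputed for each draw, and the observed I is ranked against the simulated distribution. The toy weights, counts and the rpois helper are illustrative assumptions, and I is computed on raw counts rather than on the rate residuals used by moranI.test():

```python
import math
import random

def morans_i(y, w):
    n = len(y)
    ybar = sum(y) / n
    d = [yi - ybar for yi in y]
    den = sum(di * di for di in d)
    if den == 0:            # all simulated counts equal: no variation, define I = 0
        return 0.0
    s0 = sum(sum(row) for row in w)
    num = sum(w[i][j] * d[i] * d[j] for i in range(n) for j in range(n))
    return (n / s0) * (num / den)

def rpois(lam, rng):
    # Knuth's Poisson sampler; fine for small expected counts
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mc_pvalue(observed, expected, w, r=499, seed=1):
    rng = random.Random(seed)
    i_obs = morans_i(observed, w)
    sims = [morans_i([rpois(e, rng) for e in expected], w) for _ in range(r)]
    # one-sided: proportion of simulated I values at least as large as observed
    return (1 + sum(s >= i_obs for s in sims)) / (r + 1)

w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(mc_pvalue([2, 4, 9, 12], [6.75, 6.75, 6.75, 6.75], w))
```

The `(1 + rank) / (r + 1)` form mirrors the convention used throughout this section, which guarantees the Monte Carlo p-value is never exactly zero.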
Because the Local Moran’s I LISA statistic tests each district for autocorrelation independently, concern is raised about multiple testing limitations that increase the Type I error (\\(\\alpha\\)) of the statistical tests. Local tests should therefore be used to explore and describe clusters once the global test has detected autocorrelation.\n\n\n\n\n\n\nStatistical test\n\n\n\nFor each district \\(i\\), the Local Moran’s I statistic is:\n\\[I_i = \\frac{(Y_i-\\bar{Y})}{\\frac{1}{N}\\sum_{k=1}^N(Y_k-\\bar{Y})^2}\\sum_{j=1}^Nw_{ij}(Y_j - \\bar{Y}) \\text{ with }  I = \\sum_{i=1}^NI_i/N\\]\n\n\nThe localmoran() function from the package spdep treats the variable of interest as if it were normally distributed. In some cases, this assumption could be reasonable for incidence rates, especially when the areal units of analysis have sufficiently large population counts, suggesting that the values have similar levels of variance. Unfortunately, the local Moran’s test has not been implemented for the Poisson distribution (population not large enough in some districts) in the spdep package. However, Bivand et al. (R. S. Bivand et al. 
2008) provided some code to manually perform the analysis using the Poisson distribution, and this code was further implemented in the course “Spatial Epidemiology”.\n\n# Step 1 - Create the standardized deviation of observed from expected\nsd_lm <- (district$cases - district$expected) / sqrt(district$expected)\n\n# Step 2 - Create a spatially lagged version of standardized deviation of neighbors\nwsd_lm <- lag.listw(q_listw, sd_lm)\n\n# Step 3 - the local Moran's I is the product of step 1 and step 2\ndistrict$I_lm <- sd_lm * wsd_lm\n\n# Step 4 - setup parameters for simulation of the null distribution\n\n# Specify number of simulations to run\nnsim <- 499\n\n# Specify dimensions of result based on number of regions\nN <- length(district$expected)\n\n# Create a matrix of zeros to hold results, with a row for each district, and a column for each simulation\nsims <- matrix(0, ncol = nsim, nrow = N)\n\n# Step 5 - Start a for-loop to iterate over simulation columns\nfor(i in 1:nsim){\n  y <- rpois(N, lambda = district$expected) # generate a random event count, given expected\n  sd_lmi <- (y - district$expected) / sqrt(district$expected) # standardized local measure\n  wsd_lmi <- lag.listw(q_listw, sd_lmi) # standardized spatially lagged measure\n  sims[, i] <- sd_lmi * wsd_lmi # this is the I(i) statistic under this iteration of null\n}\n\n# Step 6 - For each district, test where the observed value ranks with respect to the null simulations\nxrank <- apply(cbind(district$I_lm, sims), 1, function(x) rank(x)[1])\n\n# Step 7 - Calculate the difference between observed rank and total possible (nsim)\ndiff <- nsim - xrank\ndiff <- ifelse(diff > 0, diff, 0)\n\n# Step 8 - Assuming a uniform distribution of ranks, calculate p-value for observed\n# given the null distribution generated from simulations\ndistrict$pval_lm <- punif((diff + 1) / (nsim + 1))\n\nBriefly, the process consists of 1) computing the I statistic for the observed data, 2) estimating the null distribution of the I 
statistic by performing random sampling from a Poisson distribution and 3) comparing the observed I statistic with the null distribution to determine the probability of observing such a value if the number of cases were spatially independent. For each district, we obtain a p-value based on the comparison of the observed value and the null distribution.\nA conventional way of plotting these results is to classify the districts into 5 classes based on the local Moran’s I output. The classification of clusters that are significantly autocorrelated with their neighbors is based on a comparison of the scaled incidence of the district with the scaled weighted average incidence of its neighboring districts (computed with lag.listw()):\n\nDistricts that have higher-than-average rates in both index regions and their neighbors and showing statistically significant positive values for the local \\(I_i\\) statistic are defined as High-High (hotspot of the disease).\nDistricts that have lower-than-average rates in both index regions and their neighbors and showing statistically significant positive values for the local \\(I_i\\) statistic are defined as Low-Low (cold spot of the disease).\nDistricts that have higher-than-average rates in the index regions and lower-than-average rates in their neighbors, and showing statistically significant negative values for the local \\(I_i\\) statistic are defined as High-Low (outlier with high incidence in an area with low incidence).\nDistricts that have lower-than-average rates in the index regions and higher-than-average rates in their neighbors, and showing statistically significant negative values for the local \\(I_i\\) statistic are defined as Low-High (outlier of low incidence in an area with high incidence).\nDistricts with non-significant values for the \\(I_i\\) statistic are defined as Non-significant.\n\n\n# create lagged local raw_rate - in other words the average of the queen neighbors' values\n# values are scaled (centered 
and reduced) to be compared to the average\ndistrict$lag_std   <- scale(lag.listw(q_listw, var = district$incidence))\ndistrict$incidence_std <- scale(district$incidence)\n\n# extract pvalues\n# district$lm_pv <- lm_test[,5]\n\n# Classify local moran's outputs\ndistrict$lm_class <- NA\ndistrict$lm_class[district$incidence_std >=0 & district$lag_std >=0] <- 'High-High'\ndistrict$lm_class[district$incidence_std <=0 & district$lag_std <=0] <- 'Low-Low'\ndistrict$lm_class[district$incidence_std <=0 & district$lag_std >=0] <- 'Low-High'\ndistrict$lm_class[district$incidence_std >=0 & district$lag_std <=0] <- 'High-Low'\ndistrict$lm_class[district$pval_lm >= 0.05] <- 'Non-significant'\n\ndistrict$lm_class <- factor(district$lm_class, levels=c(\"High-High\", \"Low-Low\", \"High-Low\",  \"Low-High\", \"Non-significant\") )\n\n# create map\nmf_map(x = district,\n       var = \"lm_class\",\n       type = \"typo\",\n       cex = 2,\n       col_na = \"white\",\n       #val_order = c(\"High-High\", \"Low-Low\", \"High-Low\",  \"Low-High\", \"Non-significant\") ,\n       pal = c(\"#6D0026\" , \"blue\",  \"white\") , # \"#FF755F\",\"#7FABD3\" ,\n       leg_title = \"Clusters\")\n\nmf_layout(title = \"Cluster using Local Moran's I statistic\")\n\n\n\n\n\n\n\n6.2.3 Spatial scan statistics\nWhile Moran’s indices focus on testing for autocorrelation between neighboring polygons (under the null assumption of spatial independence), the spatial scan statistic aims at identifying an abnormally high risk in a given region compared to the risk outside of this region (under the null assumption of homogeneous distribution). 
The conception of a cluster is therefore different between the two methods.\nThe function kulldorff from the package SpatialEpi (Kim and Wakefield 2010) is a simple tool to implement spatial-only scan statistics.\n\n\n\n\n\n\nKulldorff test\n\n\n\nUnder the kulldorff test, the statistical hypotheses are:\n\nH0: the risk is constant over the area, i.e., there is spatial homogeneity of the incidence.\nH1: a particular window has a higher incidence than the rest of the area, i.e., there is spatial heterogeneity of the incidence.\n\n\n\nBriefly, the kulldorff scan statistic scans the area for clusters in several steps:\n\nIt creates a circular window of observation by defining a single location and an associated radius varying from 0 to a large number that depends on the population distribution (the largest radius could include 50% of the population).\nIt aggregates the count of events and the population at risk (or an expected count of events) inside and outside the window of observation.\nFinally, it computes the likelihood ratio and tests whether the risk is equal inside versus outside the window (H0) or greater inside the observed window (H1). The H0 distribution is estimated by simulating the distribution of counts under the null hypothesis (homogeneous risk).\nThese 3 steps are repeated for each location and each possible window radius.\n\nSince we test the significance of a large number of observation windows, one could raise concerns about multiple testing and Type I error. This approach, however, assumes that we are not interested in a set of significant clusters but only in a most likely cluster. This a priori restriction eliminates the concern for multiple comparisons, since the test reduces to the statistical significance of one single most likely cluster.\nBecause we tested all possible locations and window radii, we can also choose to look at secondary clusters. 
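The likelihood ratio computed in step 3 can be written out explicitly for the Poisson case. A hedged sketch (in Python rather than the tutorial's R; the window counts, expected counts and total are toy numbers, not the tutorial's data):

```python
import math

# Poisson log-likelihood ratio for one candidate window, as scanned by kulldorff():
# c = observed cases inside the window, e = expected cases inside,
# total_c = total observed cases over the whole area (totals assumed standardized).
def kulldorff_llr(c, e, total_c):
    # Only windows where the inside risk exceeds the outside risk count as H1
    if c / e <= (total_c - c) / (total_c - e):
        return 0.0
    inside = c * math.log(c / e)
    outside = (total_c - c) * math.log((total_c - c) / (total_c - e))
    return inside + outside

print(kulldorff_llr(60, 30.0, 500))  # elevated window -> positive statistic
print(kulldorff_llr(25, 30.0, 500))  # deficit window  -> 0.0, no evidence against H0
```

The most likely cluster is the window maximizing this statistic; its significance is then assessed against the Monte Carlo distribution of the maximum under H0, which is what removes the multiple-comparison concern discussed above.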
In this case, you should keep in mind that increasing the number of secondary clusters you select increases the risk of Type I error.\n\n#install.packages(\"SpatialEpi\")\nlibrary(\"SpatialEpi\")\n\nR spatial objects are not supported by the kulldorff() function. It instead uses a matrix of xy coordinates representing the centroids of the districts. A given district is included in the observed circular window if its centroid falls within the circle.\n\ndistrict_xy <- st_centroid(district) %>% \n  st_coordinates()\n\nhead(district_xy)\n\n         X       Y\n1 330823.3 1464560\n2 749758.3 1541787\n3 468384.0 1277007\n4 494548.2 1215261\n5 459644.2 1194615\n6 360528.3 1516339\n\n\nWe can then call the kulldorff function (you are strongly encouraged to read ?kulldorff to call the function properly). The alpha.level threshold filters the secondary clusters that will be retained. The most likely cluster is saved regardless of its significance.\n\nkd_Wfever <- kulldorff(district_xy, \n                cases = district$cases,\n                population = district$T_POP,\n                expected.cases = district$expected,\n                pop.upper.bound = 0.5, # include maximum 50% of the population in a window\n                n.simulations = 499,\n                alpha.level = 0.2)\n\n\n\n\nThe function plots the histogram of the log-likelihood ratios simulated under the null hypothesis, estimated from Monte Carlo simulations. The observed value of the most significant cluster identified from all possible scans is compared to this distribution to determine significance. All outputs are saved into an R object, here called kd_Wfever. 
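The standardized incidence ratio (SMR) reported for a cluster is simply the ratio of observed to expected cases. A quick check (in Python rather than the tutorial's R) using the values reported below for the most likely cluster:

```python
# SMR = observed cases / expected cases for a cluster
# (122 and 52.97195 are the counts reported for the most likely cluster below)
observed = 122
expected = 52.97195
smr = observed / expected
print(round(smr, 4))  # ~2.3031, matching the SMR reported by kulldorff()
```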
Unfortunately, the package does not provide any summary or visualization of the results, but we can explore the output object.

```{r}
names(kd_Wfever)
```

```
[1] "most.likely.cluster" "secondary.clusters"  "type"               
[4] "log.lkhd"            "simulated.log.lkhd" 
```

First, we can focus on the most likely cluster and explore its characteristics.

```{r}
# We can see which districts (row numbers) belong to this cluster
kd_Wfever$most.likely.cluster$location.IDs.included
```

```
 [1]  48  93  66 180 133  29 194 118  50 144  31 141   3 117  22  43 142
```

```{r}
# standardized incidence ratio
kd_Wfever$most.likely.cluster$SMR
```

```
[1] 2.303106
```

```{r}
# number of observed and expected cases in this cluster
kd_Wfever$most.likely.cluster$number.of.cases
```

```
[1] 122
```

```{r}
kd_Wfever$most.likely.cluster$expected.cases
```

```
[1] 52.97195
```

Seventeen districts belong to the most likely cluster, and its number of cases is 2.3 times higher than the expected number of cases.

Similarly, we can study the secondary clusters. Results are saved in a list.

```{r}
# number of secondary clusters
length(kd_Wfever$secondary.clusters)
```

```
[1] 1
```

```{r}
# retrieve data for all secondary clusters into a table
df_secondary_clusters <- data.frame(
  SMR = sapply(kd_Wfever$secondary.clusters, '[[', 5),
  number.of.cases = sapply(kd_Wfever$secondary.clusters, '[[', 3),
  expected.cases = sapply(kd_Wfever$secondary.clusters, '[[', 4),
  p.value = sapply(kd_Wfever$secondary.clusters, '[[', 8))

print(df_secondary_clusters)
```

```
       SMR number.of.cases expected.cases p.value
1 3.767698              16       4.246625   0.008
```

We only have one secondary cluster, composed of one district.

```{r}
# create an empty column to store cluster information
district$k_cluster <- NA

# save cluster information from the kulldorff outputs
district$k_cluster[kd_Wfever$most.likely.cluster$location.IDs.included] <- 'Most likely cluster'

for(i in 1:length(kd_Wfever$secondary.clusters)){
  district$k_cluster[kd_Wfever$secondary.clusters[[i]]$location.IDs.included] <- paste(
    'Secondary cluster', i, sep = ' ')
}

#district$k_cluster[is.na(district$k_cluster)] <- "No cluster"

# create map
mf_map(x = district,
       var = "k_cluster",
       type = "typo",
       cex = 2,
       col_na = "white",
       pal = mf_get_pal(palette = "Reds", n = 3)[1:2],
       leg_title = "Clusters")

mf_layout(title = "Cluster using Kulldorff scan statistic")
```

::: {.callout-note title="To go further …"}
In this example, the expected number of cases was defined using the population count, but note that standardization over other variables such as age could also be implemented with the `strata` parameter of the `kulldorff()` function.

In addition, this cluster analysis was performed solely with a spatial scan, but keep in mind that this method of cluster detection can also be applied to spatio-temporal data, where a cluster is defined as an abnormal number of cases in a delimited spatial area during a given period of time. The windows of observation are then defined by a center, a radius and a time period. You can look at the `scan_ep_poisson()` function from the package `scanstatistics` (Allévius 2018) for this analysis.
:::

Allévius, Benjamin. 2018. "Scanstatistics: Space-Time Anomaly Detection Using Scan Statistics." *Journal of Open Source Software* 3 (25): 515.

Bivand, Roger S, Edzer J Pebesma, Virgilio Gómez-Rubio, and Edzer Jan Pebesma. 2008. *Applied Spatial Data Analysis with R*. Vol. 747248717. Springer.

Bivand, Roger, Micah Altman, Luc Anselin, Renato Assunção, Olaf Berke, Andrew Bernat, and Guillaume Blanchet. 2015. "Package 'Spdep'." *The Comprehensive R Archive Network*.

Gómez-Rubio, Virgilio, Juan Ferrándiz-Ferragud, Antonio López-Quílez, et al. 2015. "Package 'DCluster'."

Kim, Albert Y, and Jon Wakefield. 2010. "R Data and Methods for Spatial Epidemiology: The SpatialEpi Package." Dept of Statistics, University of Washington.