Granger causality (GC) is a powerful tool for causal inference in time
series analysis. Because it is typically estimated from time series data sampled at a finite rate, the
GC value inherently depends on the sampling interval $\tau$. Intuitively, a higher
sampling rate yields a time series that better approximates the underlying continuous signal. However,
previous studies have shown that the bivariate GC converges to zero linearly as $\tau$ approaches zero; the vanishing GC value can therefore lead to the mis-inference of no causality even when causality is present.
In this work, through mathematical analysis,
we show that this asymptotic behavior remains valid for conditional GC applied to
systems of more than two variables. We validate the analytical result by computing the GC value at multiple sampling rates for simulated data
of Hodgkin-Huxley neuronal networks and the experimental data of intracranial EEG
signals. Our results demonstrate the hazard of GC inference at high sampling rates,
and we propose an accurate inference approach based on computing the ratio of GC to $\tau$ as $\tau$ approaches zero.