A bridge for professionals without strong math backgrounds. Probability distributions, hypothesis testing, regression, and VaR — explained from first principles with connections to how they appear on the FRM exam.
Updated for the 2026 FRM curriculum • Quantitative Analysis = 20% of Part 1
If you're coming to the FRM from a non-quantitative background — operations, compliance, legal, audit, relationship management, or any role where you haven't regularly used statistics — the Quantitative Analysis domain can feel like a foreign language. Terms like "heteroscedasticity," "chi-squared distribution," and "OLS estimator" are thrown around as if everyone learned them in school.
This guide bridges that gap. It covers the five core quantitative areas tested on FRM Part 1, explains each one in plain language with real risk management context, and connects every concept to how it actually appears on the exam. This is not a textbook replacement — it's a map that shows you what to learn, why it matters, and where to focus your time.
Foundation 1
Probability distributions are the mathematical language of uncertainty — and risk management is fundamentally about quantifying uncertainty. Every VaR calculation, every stress test, and every credit risk model rests on assumptions about how returns, losses, or defaults are distributed.
The bell curve. Symmetric around the mean, fully described by two parameters: mean (μ) and standard deviation (σ). In risk management, the normal distribution is the default assumption for asset returns in parametric VaR and for error terms in regression models. You need to know how to convert any value to a z-score (Z = (X − μ) / σ) and look up probabilities using the standard normal table. Key quantiles to memorize: 90% = 1.282, 95% = 1.645, 99% = 2.326.
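The z-score conversion and the quoted quantiles can be checked with Python's standard library. The return figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
from statistics import NormalDist

# Standard normal distribution (mean 0, sd 1)
std_normal = NormalDist(mu=0, sigma=1)

# The one-tailed quantiles quoted above
print(round(std_normal.inv_cdf(0.90), 3))  # 1.282
print(round(std_normal.inv_cdf(0.95), 3))  # 1.645
print(round(std_normal.inv_cdf(0.99), 3))  # 2.326

# Z-score: how many standard deviations a value lies from the mean.
# Hypothetical example: a -4% daily return when mu = 0.05%, sigma = 1.5%
z = (-4.0 - 0.05) / 1.5
print(round(z, 2))                 # -2.7
print(round(std_normal.cdf(z), 4)) # probability of a return this bad or worse
```

On the exam you would do the same lookup with the standard normal table rather than code, but seeing the quantiles reproduced numerically helps anchor the values worth memorizing.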
If a variable's natural logarithm is normally distributed, the variable itself follows a lognormal distribution. This matters because asset prices can't go negative, but normal distributions allow negative values. The Black-Scholes option pricing model assumes stock prices are lognormally distributed (which means log returns are normal). On the exam, you'll see this in the context of geometric Brownian motion and option pricing.
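A quick simulation makes the point concrete: updating a price multiplicatively by exp(log return) can never push it below zero. The parameters below are hypothetical:

```python
import math
import random

random.seed(42)

# Sketch: if log returns are normal, prices are lognormal and stay positive.
# Hypothetical parameters: daily log-return mean 0.0002, sd 0.015, start at 100.
price = 100.0
for _ in range(10_000):
    log_ret = random.gauss(0.0002, 0.015)
    price *= math.exp(log_ret)  # multiplicative update: price can never cross zero
    assert price > 0

print(f"price after 10,000 steps: {price:.2f}")  # always strictly positive
```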
The Student's t-distribution looks like a normal distribution but has heavier tails, meaning extreme events are more likely. Used when sample sizes are small (typically n < 30) or when the population variance is unknown. In FRM, you'll use the t-distribution for hypothesis testing and constructing confidence intervals for regression coefficients. As degrees of freedom increase, the t-distribution converges to the normal distribution.
The chi-squared distribution is right-skewed and is used for testing variance and for goodness-of-fit tests. In risk management, the chi-squared test appears in backtesting VaR models (testing whether the observed number of exceptions matches the expected number) and in testing whether a portfolio's volatility differs from a benchmark. The key formula: χ² = (n−1)s² / σ², where s² is the sample variance and σ² is the hypothesized population variance.
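A minimal sketch of the variance test statistic, with hypothetical numbers:

```python
# Sketch of the variance test statistic from the text:
#   chi2 = (n - 1) * s^2 / sigma0^2
# Hypothetical example: n = 25 daily returns, sample sd 1.8%,
# hypothesized (benchmark) sd 1.5%.
n = 25
s = 0.018
sigma0 = 0.015

chi2 = (n - 1) * s**2 / sigma0**2
print(f"chi-squared statistic: {chi2:.2f}")  # 34.56

# With 24 degrees of freedom, the 5% upper critical value is about 36.42,
# so at the 5% level we would fail to reject H0: sigma = 1.5%.
```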
Expect 3–5 questions that directly test your understanding of distributions: computing probabilities from z-scores, choosing the right distribution for a given scenario, and understanding why fat tails matter for risk measurement. The most common trap: using the normal distribution when the t-distribution is appropriate (small sample size), which underestimates tail probabilities.
Foundation 2
2026 curriculum change: Hypothesis testing has shifted from conceptual to calculation-focused. You now need to compute test statistics, determine critical values, and construct confidence intervals with numerical precision — not just identify error types conceptually.
Hypothesis testing is the formal framework for making decisions from data. In risk management, it's used to determine whether a VaR model is accurate (backtesting), whether a regression coefficient is statistically significant, and whether portfolio returns differ from a benchmark. The procedure follows a fixed sequence:
The null hypothesis (H₀) is the default assumption — typically "no effect" or "no difference." The alternative hypothesis (H₁) is what you're trying to provide evidence for. Example: H₀: μ = 0 (the portfolio has zero alpha) vs. H₁: μ ≠ 0 (the portfolio has non-zero alpha).
Typically 5% (α = 0.05) or 1% (α = 0.01). This is the probability of rejecting H₀ when it's actually true (Type I error). A lower α means you need stronger evidence to reject the null.
For a mean test: t = (x̄ − μ₀) / (s / √n). For a regression coefficient: t = (β̂ − 0) / SE(β̂). The test statistic measures how far the sample result is from the null hypothesis value, in units of standard error.
If |t| > t-critical, reject H₀. Equivalently, if p-value < α, reject H₀. The p-value is the probability of observing a test statistic at least as extreme as yours, assuming H₀ is true. A small p-value means the data is unlikely under the null hypothesis.
Either "reject H₀" or "fail to reject H₀." Never say "accept H₀" — hypothesis testing can only reject or fail to reject the null. The distinction matters on the exam.
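The full sequence can be walked through end to end for a mean test; the sample figures below are hypothetical:

```python
import math

# End-to-end sketch of the hypothesis-testing procedure for a mean test.
# Hypothetical data: 36 monthly excess returns, sample mean 0.40%, sample sd 1.2%.
# Hypotheses: H0: mu = 0 (zero alpha) vs H1: mu != 0, at alpha = 0.05.
n = 36
x_bar = 0.004
s = 0.012
mu0 = 0.0

# Test statistic: t = (x_bar - mu0) / (s / sqrt(n))
t_stat = (x_bar - mu0) / (s / math.sqrt(n))
print(f"t = {t_stat:.2f}")  # 2.00

# Decision: with 35 degrees of freedom, the 5% two-tailed critical value
# is about 2.03, so |t| = 2.00 < 2.03.
# Conclusion: fail to reject H0. The alpha is not significant at the 5% level.
```

Note how close this sits to the boundary: using the large-sample normal critical value of 1.96 instead of the t critical value would flip the conclusion, which is precisely why the t-versus-normal choice matters.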
Type I Error (False Positive)
Rejecting H₀ when it's actually true. Probability = α. Example: concluding a VaR model is inaccurate when it's actually fine.
Type II Error (False Negative)
Failing to reject H₀ when it's actually false. Probability = β. Example: concluding a VaR model is fine when it's actually systematically underestimating risk.
Foundation 3
Linear regression is the workhorse of quantitative finance. It models the relationship between a dependent variable (what you're trying to explain) and one or more independent variables (the explanatory factors). In risk management, regression appears everywhere: estimating portfolio beta, modeling credit spreads, factor analysis, and stress testing. The FRM exam emphasizes interpretation over derivation — you need to understand what regression output means, not prove how OLS estimators are derived.
OLS finds the line (or hyperplane in multiple regression) that minimizes the sum of squared residuals — the vertical distances between observed data points and the fitted line. For a simple regression Y = α + βX + ε, the slope estimate is β̂ = Cov(X,Y) / Var(X). OLS is unbiased and efficient under the Gauss-Markov assumptions (linearity, exogeneity, homoscedasticity, no autocorrelation, and no perfect multicollinearity).
R² measures the proportion of variance in the dependent variable explained by the model. R² = 1 − (SSE / SST), where SSE is the sum of squared errors and SST is the total sum of squares. An R² of 0.75 means 75% of the variation is explained by the model. Important nuance for the exam: R² always increases when you add more variables — use Adjusted R² to penalize unnecessary complexity in multiple regression.
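A minimal sketch of both formulas, the Cov/Var slope estimate and R², on a small hypothetical data set:

```python
# Sketch: estimating beta by OLS as Cov(X, Y) / Var(X), then R-squared.
# Hypothetical data: market excess returns (x) and stock excess returns (y), in %.
x = [1.0, -0.5, 2.0, 0.3, -1.2, 0.8, 1.5, -0.7]
y = [1.4, -0.2, 2.9, 0.1, -1.5, 1.0, 2.2, -1.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
var_x = sum((xi - mean_x) ** 2 for xi in x) / (n - 1)

beta = cov_xy / var_x           # slope estimate: Cov(X,Y) / Var(X)
alpha = mean_y - beta * mean_x  # intercept

# R^2 = 1 - SSE / SST
sse = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
sst = sum((yi - mean_y) ** 2 for yi in y)
r_squared = 1 - sse / sst

print(f"beta = {beta:.3f}, alpha = {alpha:.3f}, R^2 = {r_squared:.3f}")
```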
A coefficient is statistically significant if you can reject the null hypothesis that it equals zero. Test this with: t = β̂ / SE(β̂). If the absolute value of t exceeds the critical value at your chosen significance level, the coefficient is significant. On the exam, you might be given a regression output table and asked whether a specific variable is significant at 5% — just check if the t-statistic exceeds ~2.0 (a common rule-of-thumb for large samples).
The standard error of a coefficient measures the precision of the estimate. Smaller standard errors mean more precise estimates and larger t-statistics. Standard errors increase with multicollinearity (correlated independent variables) and heteroscedasticity (non-constant error variance). The exam tests whether you can identify conditions that inflate standard errors and the consequences for inference.
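On the exam, the significance check usually reduces to a two-line computation from a regression output table. The figures below are hypothetical:

```python
# Sketch: significance check from a (hypothetical) regression output table.
# Suppose the table reports beta_hat = 1.36 with SE(beta_hat) = 0.42, n = 60.
beta_hat = 1.36
se_beta = 0.42

t_stat = beta_hat / se_beta
print(f"t = {t_stat:.2f}")  # 3.24

# Rule of thumb for large samples: |t| > ~2.0 means significant at the 5% level.
is_significant = abs(t_stat) > 2.0
print(is_significant)  # True
```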
Multicollinearity
Inflated standard errors, unreliable individual coefficients (but overall model may still be fine)
Heteroscedasticity
Standard errors are biased, leading to incorrect t-statistics and unreliable hypothesis tests
Autocorrelation
Understated standard errors, overstated t-statistics, and inflated R². Detected by Durbin-Watson test
Omitted variable bias
Biased and inconsistent coefficient estimates if the omitted variable correlates with included variables
Foundation 4
Value at Risk (VaR) is where everything you've learned in quantitative analysis comes together. It answers a simple question: "What is the maximum loss I should expect over a given time period at a given confidence level?" VaR is the single most tested risk metric on both FRM Part 1 and Part 2. Understanding VaR requires probability distributions (Foundation 1), hypothesis testing for backtesting (Foundation 2), and regression for factor-based risk models (Foundation 3).
Assumes returns follow a normal distribution. The formula is straightforward: VaR = μ − z × σ, where z is the standard normal quantile for your confidence level (2.326 for 99%, 1.645 for 95%). For a portfolio with a daily mean return of 0.05% and a daily standard deviation of 1.5%, the 99% 1-day return threshold is 0.05% − 2.326 × 1.5% = −3.44%, so the 99% 1-day VaR is a loss of 3.44% (VaR is conventionally quoted as a positive loss amount). The main limitation: real financial returns have fatter tails than the normal distribution, so parametric VaR tends to underestimate extreme losses.
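The same computation in code, quoting VaR as a positive loss. The $10 million portfolio value is a hypothetical addition to show the dollar conversion:

```python
# Parametric (delta-normal) VaR sketch, using the figures from the text.
mu = 0.0005    # daily mean return: 0.05%
sigma = 0.015  # daily standard deviation: 1.5%
z_99 = 2.326   # one-tailed 99% standard normal quantile

# VaR quoted as a positive loss: z * sigma - mu
var_99 = z_99 * sigma - mu
print(f"99% 1-day VaR: {var_99:.2%}")  # 3.44%

# In dollar terms for a hypothetical $10 million portfolio:
portfolio_value = 10_000_000
print(f"${var_99 * portfolio_value:,.0f}")  # $343,900
```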
Uses actual historical returns, so no distributional assumptions are required. Sort the last 500 days of portfolio returns from worst to best. For 99% VaR, take the loss on the 5th-worst day (500 × 1% = 5). This naturally captures fat tails, skewness, and non-linear relationships. The weakness: it gives equal weight to all observations, so a calm period two years ago gets the same weight as last week's volatility spike. Also, with only 500 data points, the 99% tail estimate relies on just 5 observations.
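A minimal sketch of the procedure on simulated (hypothetical) returns standing in for a 500-day history:

```python
import random

random.seed(7)

# Historical simulation sketch: 500 hypothetical daily portfolio returns.
returns = [random.gauss(0.0005, 0.015) for _ in range(500)]

# Sort worst to best; the 99% VaR is the 5th-worst observation (500 x 1% = 5).
sorted_returns = sorted(returns)  # ascending: worst losses first
var_99 = -sorted_returns[4]       # 5th-worst return, sign-flipped to a loss
print(f"99% 1-day historical VaR: {var_99:.2%}")
```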
Expected Shortfall answers a question VaR cannot: "When losses exceed VaR, how bad do they get?" ES is the average loss in the tail beyond VaR. At 97.5% confidence, ES is the average loss in the worst 2.5% of scenarios. Basel III now requires banks to use Expected Shortfall (at 97.5%) rather than VaR for market risk capital calculations because ES is a coherent risk measure — it satisfies sub-additivity, meaning the risk of a combined portfolio is never greater than the sum of individual risks.
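A sketch of ES alongside VaR on simulated returns. Here the tail is taken as the worst 2.5% of observations, including the VaR scenario itself; textbook conventions vary slightly on that boundary:

```python
import random

random.seed(7)

# Expected Shortfall sketch at 97.5% on 1,000 hypothetical daily returns.
returns = [random.gauss(0.0, 0.012) for _ in range(1000)]

sorted_returns = sorted(returns)  # worst first
tail = sorted_returns[:25]        # worst 2.5% of 1,000 = 25 scenarios
var_975 = -sorted_returns[24]     # 97.5% VaR: 25th-worst observation
es_975 = -sum(tail) / len(tail)   # ES: average loss in that tail

print(f"97.5% VaR: {var_975:.2%}, 97.5% ES: {es_975:.2%}")
# ES always exceeds VaR at the same confidence level, because it averages
# the losses beyond the VaR threshold rather than reporting the threshold itself.
```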
To convert a 1-day VaR to a T-day VaR: VaR(T) = VaR(1) × √T. This assumes returns are independently and identically distributed (i.i.d.). A 1-day 99% VaR of $2.5 million scales to a 10-day VaR of $2.5M × √10 = $7.91M. The exam frequently tests this formula — and even more frequently tests whether you understand its assumptions. If returns are autocorrelated or volatility clusters (as in GARCH models), the square root of time rule breaks down.
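The scaling itself is one line of arithmetic:

```python
import math

# Square-root-of-time scaling, using the figures from the text.
var_1d = 2_500_000  # 1-day 99% VaR: $2.5 million
horizon_days = 10

var_10d = var_1d * math.sqrt(horizon_days)
print(f"10-day VaR: ${var_10d:,.0f}")  # $7,905,694

# Valid only if returns are i.i.d. -- autocorrelation or volatility
# clustering (as in GARCH models) breaks this scaling rule.
```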
For Non-Quant Backgrounds
The quant section isn't about raw talent — it's about structured practice. These six strategies help candidates without math backgrounds build quantitative fluency efficiently.
If you haven't taken a statistics course in years (or ever), invest 4–6 weeks in pre-study before beginning your official FRM prep. Cover basic probability, descriptive statistics, and the normal distribution. This upfront investment prevents the frustration of hitting a wall when FRM material assumes you know these concepts.
Do not rely on Excel, Python, or mental math. The exam requires you to compute test statistics, z-scores, regression coefficients, and VaR using only your approved calculator. Work through problems step by step on paper, then verify with your TI BA II Plus or HP 12C. Speed comes from repetition, not shortcuts.
Print the formula sheet and test yourself: cover the right column and try to reproduce each formula from memory. Any formula you can't recall is a study priority. The FRM exam does not provide formulas — everything must be memorized. Our free formula sheet is designed for exactly this kind of active recall practice.
Don't memorize formulas in isolation. VaR isn't just "z times sigma" — it's the threshold loss that won't be exceeded at a given confidence level. Expected Shortfall isn't just "average loss beyond VaR" — it's what regulators use because VaR ignores tail severity. Understanding the "why" makes recall easier and helps you answer tricky conceptual questions.
After learning a concept, immediately attempt practice questions on that topic. If you can't solve them, you've identified a gap. This is more efficient than re-reading the same chapter three times. PrepAscend's question bank lets you filter by domain and difficulty so you can target exactly the Quantitative Analysis topics you're weakest on.
The 2026 curriculum shifted hypothesis testing from conceptual to calculation-focused. This means you need to be able to compute test statistics, determine critical values, and construct confidence intervals with numerical precision — not just identify Type I vs. Type II errors conceptually. Practice the full test procedure end-to-end.
Pair this guide with the formula sheet: Every formula referenced in this guide is available on our free formula sheet. Use it for active recall practice — cover the formula column and test yourself daily until every expression is second nature.
PrepAscend has 115+ Quantitative Analysis practice questions with difficulty filtering, step-by-step solutions, and an AI coach that explains concepts at your level — whether you're a math PhD or a compliance officer touching statistics for the first time.
PrepAscend is not affiliated with, endorsed by, or associated with the Global Association of Risk Professionals (GARP). FRM®, GARP®, and Financial Risk Manager® are trademarks owned by GARP.