Module 5 Mini-Project: Chapter 6 Scenario Analysis for RL¶

This notebook now uses an API-fed macro base instead of the earlier manual private-consumption table and local oil spreadsheets.

What this notebook does:

  • loads RL history and the Chapter 5 anchor,
  • pulls macro drivers from FRED and FMP,
  • builds the Chapter 6 base path from the latest API actuals with an explicit carry-forward rule,
  • keeps the scenario engine, diagnostics, and sensitivity framework in one shared path,
  • exports a compact Chapter 6 output pack.

Model scope in this version:

  • exogenous drivers used directly in the model: oil_price and private_consumption_qoq
  • API reference series loaded for later extension: cpi_index and savings_rate
  • named scenarios: base, upside, downside, liquidity_stress
  • main two-way sensitivity: oil_price × private_consumption_qoq
  • RL timing adjustment: calendar macro quarters are mapped into RL fiscal quarters before the forecast is run
  • important: the API replaces the hard-coded macro input table, but the scenario translation layer remains an explicit set of modeling assumptions
from dataclasses import dataclass
from pathlib import Path

import numpy as np
import pandas as pd
import requests
from IPython.display import display

pd.set_option("display.max_columns", 200)
pd.set_option("display.width", 180)
pd.set_option("display.float_format", lambda x: f"{x:,.2f}")


def locate_project_root(start=None):
    fallback = Path(
        "/Users/theomachado/Library/CloudStorage/OneDrive-UniversityofFlorida/Spring 2026/FIN4453 - OFC"
    )
    start = Path.cwd() if start is None else Path(start)
    for candidate in [start, *start.parents, fallback]:
        if (candidate / "Mini-projects").exists() and (candidate / "DATA").exists():
            return candidate
    raise FileNotFoundError("Could not locate the FIN4453 project root.")


PROJECT_ROOT = locate_project_root()
MODULE4_OUT = PROJECT_ROOT / "Mini-projects" / "Module 4" / "outputs"
MODULE5_DIR = PROJECT_ROOT / "Mini-projects" / "Module 5"
OUT_DIR = MODULE5_DIR / "outputs"
OUT_DIR.mkdir(parents=True, exist_ok=True)

for required in [
    MODULE4_OUT / "chapter5_quarterly_inputs_usdm.csv",
    PROJECT_ROOT / "DATA" / "csv_export" / "RL_income_statement.csv",
    PROJECT_ROOT / "DATA" / "csv_export" / "RL_balance_sheet.csv",
    PROJECT_ROOT / "DATA" / "csv_export" / "RL_cash_flow.csv",
]:
    if not required.exists():
        raise FileNotFoundError(f"Missing required input: {required}")


display(
    pd.Series(
        {
            "project_root": str(PROJECT_ROOT),
            "module4_outputs": str(MODULE4_OUT),
            "module5_outputs": str(OUT_DIR),
        }
    )
)
project_root       /Users/theomachado/Library/CloudStorage/OneDri...
module4_outputs    /Users/theomachado/Library/CloudStorage/OneDri...
module5_outputs    /Users/theomachado/Library/CloudStorage/OneDri...
dtype: object

API Macro Setup¶

This notebook now uses one macro ingestion path.

Macro data used directly in the model:

  • FRED: private_consumption_qoq
  • FMP: oil_price

Reference series loaded for later extension:

  • FRED: cpi_index
  • FRED: savings_rate

For now, the forecast base is built with one transparent rule:

  • use the latest available API actual for each modeled driver,
  • carry that level forward across the forecast horizon,
  • apply the named scenario shocks on top of that shared base path.
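The carry-forward rule can be sketched on toy data before the real API pull below (the quarter labels and oil prices here are illustrative, not fetched values):

```python
import pandas as pd

# Toy quarterly actuals; the last non-missing row is the "latest available API actual".
actuals = pd.DataFrame({
    "calendar_quarter": pd.period_range("2025Q2", periods=3, freq="Q"),
    "oil_price": [63.82, 64.99, 59.09],
})

latest = actuals.dropna(subset=["oil_price"]).iloc[-1]

# Flat carry-forward: repeat the latest actual across the forecast horizon.
horizon = pd.period_range(latest["calendar_quarter"] + 1, periods=4, freq="Q")
base_path = pd.DataFrame({
    "calendar_quarter": horizon,
    "oil_price_base": float(latest["oil_price"]),
})
print(base_path)
```

Scenario shocks are then layered on top of this shared flat path, so every named case starts from the same base.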
import os

# Keys are read from the environment rather than hard-coded in the notebook.
FRED_API_KEY = os.environ.get("FRED_API_KEY", "")
FMP_API_KEY = os.environ.get("FMP_API_KEY", "")

FRED_BASE_URL = "https://api.stlouisfed.org/fred"
FMP_BASE_URL = "https://financialmodelingprep.com/stable"

API_MACRO_CONFIG = {
    "history_start": "2015-01-01",
    "forecast_horizon_quarters": 7,
    "fred_series": {
        "private_consumption_qoq": "DPCERL1Q225SBEA",
        "cpi_index": "CPIAUCSL",
        "savings_rate": "PSAVERT",
    },
    "fmp_symbols": {
        "oil_spot": "CLUSD",
    },
    "baseline_rules": {
        "private_consumption_qoq": "flat_last_actual",
        "oil_price": "flat_last_actual",
    },
}


display(
    pd.Series(
        {
            "fred_series_private_consumption": API_MACRO_CONFIG["fred_series"]["private_consumption_qoq"],
            "fred_series_cpi": API_MACRO_CONFIG["fred_series"]["cpi_index"],
            "fred_series_savings": API_MACRO_CONFIG["fred_series"]["savings_rate"],
            "fmp_oil_symbol": API_MACRO_CONFIG["fmp_symbols"]["oil_spot"],
            "forecast_horizon_quarters": API_MACRO_CONFIG["forecast_horizon_quarters"],
            "private_consumption_rule": API_MACRO_CONFIG["baseline_rules"]["private_consumption_qoq"],
            "oil_rule": API_MACRO_CONFIG["baseline_rules"]["oil_price"],
        }
    )
)
fred_series_private_consumption     DPCERL1Q225SBEA
fred_series_cpi                            CPIAUCSL
fred_series_savings                         PSAVERT
fmp_oil_symbol                                CLUSD
forecast_horizon_quarters                         7
private_consumption_rule           flat_last_actual
oil_rule                           flat_last_actual
dtype: object
def load_exogenous_api_data(
    start_date="2015-01-01",
    end_date=None,
    fred_series_map=None,
    oil_symbol="CLUSD",
):
    """
    Pull exogenous macro data from FRED + FMP using keys defined
    directly in the notebook as FRED_API_KEY and FMP_API_KEY.
    """

    if not FRED_API_KEY:
        raise ValueError("FRED_API_KEY is empty in the notebook.")
    if not FMP_API_KEY:
        raise ValueError("FMP_API_KEY is empty in the notebook.")

    if fred_series_map is None:
        fred_series_map = {
            "private_consumption_qoq": "DPCERL1Q225SBEA",
            "cpi_index": "CPIAUCSL",
            "savings_rate": "PSAVERT",
        }

    def _get_json(url, params, timeout=30):
        response = requests.get(url, params=params, timeout=timeout)
        response.raise_for_status()
        payload = response.json()

        if isinstance(payload, dict):
            error_message = (
                payload.get("error")
                or payload.get("Error Message")
                or payload.get("Error")
            )
            if error_message:
                raise ValueError(f"API error from {url}: {error_message}")

        return payload

    def _fetch_fred_series(series_id, frequency=None, aggregation_method="avg", units="lin"):
        params = {
            "series_id": series_id,
            "api_key": FRED_API_KEY,
            "file_type": "json",
            "observation_start": start_date,
            "units": units,
        }
        if end_date is not None:
            params["observation_end"] = end_date
        if frequency is not None:
            params["frequency"] = frequency
            params["aggregation_method"] = aggregation_method

        payload = _get_json(
            f"{FRED_BASE_URL}/series/observations",
            params=params,
        )
        observations = pd.DataFrame(payload.get("observations", []))

        if observations.empty:
            return pd.DataFrame(columns=["date", "value"])

        observations["date"] = pd.to_datetime(observations["date"], errors="coerce")
        observations["value"] = pd.to_numeric(observations["value"], errors="coerce")
        observations = observations.dropna(subset=["date", "value"]).copy()
        observations = observations.sort_values("date").reset_index(drop=True)
        return observations[["date", "value"]]

    def _fetch_fmp_historical_eod(symbol):
        params = {
            "symbol": symbol,
            "apikey": FMP_API_KEY,
        }

        payload = _get_json(
            f"{FMP_BASE_URL}/historical-price-eod/full",
            params=params,
        )

        if isinstance(payload, dict):
            if isinstance(payload.get("historical"), list):
                rows = payload["historical"]
            elif isinstance(payload.get("data"), list):
                rows = payload["data"]
            elif isinstance(payload.get("results"), list):
                rows = payload["results"]
            else:
                rows = []
        elif isinstance(payload, list):
            rows = payload
        else:
            rows = []

        history = pd.DataFrame(rows)

        if history.empty:
            return pd.DataFrame(columns=["date", "value"])

        price_col = "close" if "close" in history.columns else "price"  # prefer "close"; fall back to "price" for payload variants
        history["date"] = pd.to_datetime(history["date"], errors="coerce")
        history[price_col] = pd.to_numeric(history[price_col], errors="coerce")
        history = history.dropna(subset=["date", price_col]).copy()

        if start_date is not None:
            history = history.loc[history["date"] >= pd.Timestamp(start_date)].copy()
        if end_date is not None:
            history = history.loc[history["date"] <= pd.Timestamp(end_date)].copy()

        history = history.sort_values("date").reset_index(drop=True)
        return history[["date", price_col]].rename(columns={price_col: "value"})

    def _to_quarterly(frame, value_col="value", agg="mean"):
        out = frame.copy()
        if out.empty:
            return pd.DataFrame(columns=["calendar_quarter", "value"])

        out["calendar_quarter"] = out["date"].dt.to_period("Q")
        quarterly = (
            out.groupby("calendar_quarter", as_index=False)[value_col]
            .agg(agg)
            .rename(columns={value_col: "value"})
        )
        quarterly["calendar_quarter"] = quarterly["calendar_quarter"].astype(str)
        return quarterly

    def _serialize_date_value(frame):
        return [
            {"date": row["date"].strftime("%Y-%m-%d"), "value": float(row["value"])}
            for _, row in frame.iterrows()
        ]

    def _serialize_quarterly(frame):
        return [
            {"calendar_quarter": row["calendar_quarter"], "value": float(row["value"])}
            for _, row in frame.iterrows()
        ]

    private_consumption = _fetch_fred_series(
        fred_series_map["private_consumption_qoq"]
    )
    private_consumption_q = _to_quarterly(private_consumption, value_col="value", agg="mean")

    cpi = _fetch_fred_series(
        fred_series_map["cpi_index"],
        frequency="q",
        aggregation_method="avg",
    )
    cpi_q = _to_quarterly(cpi, value_col="value", agg="mean")

    savings = _fetch_fred_series(
        fred_series_map["savings_rate"],
        frequency="q",
        aggregation_method="avg",
    )
    savings_q = _to_quarterly(savings, value_col="value", agg="mean")

    oil_daily = _fetch_fmp_historical_eod(oil_symbol)
    oil_quarterly = _to_quarterly(oil_daily, value_col="value", agg="mean")

    return {
        "private_consumption_qoq": _serialize_date_value(private_consumption),
        "private_consumption_qoq_quarterly": _serialize_quarterly(private_consumption_q),
        "cpi_index": _serialize_date_value(cpi),
        "cpi_index_quarterly": _serialize_quarterly(cpi_q),
        "savings_rate": _serialize_date_value(savings),
        "savings_rate_quarterly": _serialize_quarterly(savings_q),
        "oil_price_daily": _serialize_date_value(oil_daily),
        "oil_price_quarterly": _serialize_quarterly(oil_quarterly),
        "meta": {
            "history_start": start_date,
            "history_end": end_date,
            "oil_symbol": oil_symbol,
            "fred_series_map": fred_series_map,
        },
    }
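The daily-to-quarterly collapse inside `_to_quarterly` hinges on `Series.dt.to_period("Q")`; a minimal standalone version of the same logic, with made-up daily prices, behaves like this:

```python
import pandas as pd

daily = pd.DataFrame({
    "date": pd.to_datetime(["2026-01-05", "2026-02-10", "2026-04-03"]),
    "value": [70.0, 75.0, 100.0],
})

# Bucket each observation into its calendar quarter, then average within the bucket.
daily["calendar_quarter"] = daily["date"].dt.to_period("Q")
quarterly = daily.groupby("calendar_quarter", as_index=False)["value"].mean()
quarterly["calendar_quarter"] = quarterly["calendar_quarter"].astype(str)
print(quarterly)
# 2026Q1 averages 70 and 75 to 72.5; 2026Q2 holds the single value 100.0
```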
exogenous_data = load_exogenous_api_data(
    start_date=API_MACRO_CONFIG["history_start"],
    fred_series_map=API_MACRO_CONFIG["fred_series"],
    oil_symbol=API_MACRO_CONFIG["fmp_symbols"]["oil_spot"],
)

api_macro_history_preview = (
    pd.DataFrame(exogenous_data["private_consumption_qoq_quarterly"])
    .rename(columns={"value": "private_consumption_qoq"})
    .merge(
        pd.DataFrame(exogenous_data["oil_price_quarterly"]).rename(columns={"value": "oil_price_actual"}),
        on="calendar_quarter",
        how="outer",
    )
    .merge(
        pd.DataFrame(exogenous_data["cpi_index_quarterly"]).rename(columns={"value": "cpi_index"}),
        on="calendar_quarter",
        how="outer",
    )
    .merge(
        pd.DataFrame(exogenous_data["savings_rate_quarterly"]).rename(columns={"value": "savings_rate"}),
        on="calendar_quarter",
        how="outer",
    )
    .sort_values("calendar_quarter")
    .reset_index(drop=True)
)

display(
    pd.Series(
        {
            "private_consumption_points": len(exogenous_data["private_consumption_qoq_quarterly"]),
            "oil_quarterly_points": len(exogenous_data["oil_price_quarterly"]),
            "cpi_quarterly_points": len(exogenous_data["cpi_index_quarterly"]),
            "savings_quarterly_points": len(exogenous_data["savings_rate_quarterly"]),
            "latest_private_consumption_quarter": api_macro_history_preview.dropna(subset=["private_consumption_qoq"]).iloc[-1]["calendar_quarter"],
            "latest_oil_quarter": api_macro_history_preview.dropna(subset=["oil_price_actual"]).iloc[-1]["calendar_quarter"],
        }
    )
)
display(api_macro_history_preview.tail(12))
private_consumption_points                44
oil_quarterly_points                      21
cpi_quarterly_points                      45
savings_quarterly_points                  44
latest_private_consumption_quarter    2025Q4
latest_oil_quarter                    2026Q2
dtype: object
calendar_quarter private_consumption_qoq oil_price_actual cpi_index savings_rate
34 2023Q3 3.10 82.22 305.99 5.50
35 2023Q4 3.00 78.50 308.19 5.50
36 2024Q1 1.70 76.91 311.00 6.10
37 2024Q2 3.90 80.67 313.08 5.80
38 2024Q3 4.00 75.39 314.12 5.10
39 2024Q4 3.90 70.30 316.59 4.70
40 2025Q1 0.60 71.52 319.48 5.10
41 2025Q2 2.50 63.82 320.79 5.00
42 2025Q3 3.50 64.99 323.24 4.40
43 2025Q4 1.90 59.09 325.55 4.00
44 2026Q1 NaN 72.61 328.11 NaN
45 2026Q2 NaN 100.94 NaN NaN
def payload_to_quarterly_frame(rows, value_name):
    frame = pd.DataFrame(rows)
    if frame.empty:
        return pd.DataFrame(columns=["calendar_quarter", value_name])

    if "calendar_quarter" not in frame.columns:
        frame["date"] = pd.to_datetime(frame["date"], errors="coerce")
        frame["calendar_quarter"] = frame["date"].dt.to_period("Q")
    else:
        frame["calendar_quarter"] = pd.PeriodIndex(frame["calendar_quarter"], freq="Q")

    frame["value"] = pd.to_numeric(frame["value"], errors="coerce")
    frame = frame.dropna(subset=["calendar_quarter", "value"]).copy()
    frame = frame.sort_values("calendar_quarter").drop_duplicates(subset=["calendar_quarter"], keep="last")
    return frame[["calendar_quarter", "value"]].rename(columns={"value": value_name}).reset_index(drop=True)
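`payload_to_quarterly_frame` accepts either serialized date/value rows or pre-bucketed quarterly rows. Note that, unlike `_to_quarterly` (which averages within a quarter), it keeps only the last row per quarter; the date-branch plus dedupe can be checked standalone on toy rows:

```python
import pandas as pd

rows = [
    {"date": "2026-01-15", "value": 70.0},
    {"date": "2026-02-15", "value": 74.0},  # same calendar quarter as the row above
]

frame = pd.DataFrame(rows)
frame["date"] = pd.to_datetime(frame["date"], errors="coerce")
frame["calendar_quarter"] = frame["date"].dt.to_period("Q")
frame["value"] = pd.to_numeric(frame["value"], errors="coerce")

# Mirror drop_duplicates(keep="last"): the later observation wins within a quarter.
deduped = (
    frame.sort_values("calendar_quarter")
    .drop_duplicates(subset=["calendar_quarter"], keep="last")
)
print(deduped[["calendar_quarter", "value"]])
# A single 2026Q1 row survives, carrying the later value 74.0
```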


def build_api_macro_history_table(exogenous_data):
    private_consumption = payload_to_quarterly_frame(
        exogenous_data.get("private_consumption_qoq_quarterly", []),
        "private_consumption_qoq",
    )
    oil = payload_to_quarterly_frame(
        exogenous_data.get("oil_price_quarterly", []),
        "oil_price_actual",
    )
    cpi = payload_to_quarterly_frame(
        exogenous_data.get("cpi_index_quarterly", []),
        "cpi_index",
    )
    savings = payload_to_quarterly_frame(
        exogenous_data.get("savings_rate_quarterly", []),
        "savings_rate",
    )

    history = private_consumption.merge(oil, on="calendar_quarter", how="outer")
    history = history.merge(cpi, on="calendar_quarter", how="outer")
    history = history.merge(savings, on="calendar_quarter", how="outer")
    return history.sort_values("calendar_quarter").reset_index(drop=True)


def build_api_macro_snapshot(exogenous_data):
    macro_history = build_api_macro_history_table(exogenous_data)
    snapshot_rows = []
    driver_config = [
        ("private_consumption_qoq", "private_consumption_qoq", API_MACRO_CONFIG["baseline_rules"]["private_consumption_qoq"], "modeled"),
        ("oil_price", "oil_price_actual", API_MACRO_CONFIG["baseline_rules"]["oil_price"], "modeled"),
        ("cpi_index", "cpi_index", "reference_only", "reference"),
        ("savings_rate", "savings_rate", "reference_only", "reference"),
    ]

    for driver_name, column_name, rule_name, driver_role in driver_config:
        available = macro_history.dropna(subset=[column_name])
        if available.empty:
            snapshot_rows.append(
                {
                    "driver": driver_name,
                    "role": driver_role,
                    "latest_actual_quarter": None,
                    "latest_actual_value": np.nan,
                    "baseline_rule": rule_name,
                }
            )
            continue

        latest = available.iloc[-1]
        snapshot_rows.append(
            {
                "driver": driver_name,
                "role": driver_role,
                "latest_actual_quarter": str(latest["calendar_quarter"]),
                "latest_actual_value": float(latest[column_name]),
                "baseline_rule": rule_name,
            }
        )

    return pd.DataFrame(snapshot_rows)


def build_macro_base_from_api(history, exogenous_data, forecast_horizon_quarters=7):
    macro_history = build_api_macro_history_table(exogenous_data)

    private_consumption_history = macro_history.dropna(subset=["private_consumption_qoq"])
    oil_history = macro_history.dropna(subset=["oil_price_actual"])

    if private_consumption_history.empty:
        raise ValueError("No private-consumption history was loaded from the API payload.")
    if oil_history.empty:
        raise ValueError("No oil-price history was loaded from the API payload.")

    latest_private_consumption = private_consumption_history.iloc[-1]
    latest_oil = oil_history.iloc[-1]

    latest_history_calendar_quarter = history["date"].max().to_period("Q")
    forecast_quarters = pd.period_range(
        latest_history_calendar_quarter + 1,
        periods=forecast_horizon_quarters,
        freq="Q",
    )

    macro_base = pd.DataFrame({"calendar_quarter": forecast_quarters})
    macro_base["private_consumption_qoq"] = float(latest_private_consumption["private_consumption_qoq"])
    macro_base["oil_price_base"] = float(latest_oil["oil_price_actual"])
    macro_base["private_consumption_source_quarter"] = str(latest_private_consumption["calendar_quarter"])
    macro_base["oil_source_quarter"] = str(latest_oil["calendar_quarter"])
    macro_base["private_consumption_source_rule"] = "API latest actual carried forward"
    macro_base["oil_source_rule"] = "API latest actual carried forward"
    macro_base["macro_source_note"] = "API base path with flat carry-forward before scenario shocks"

    fiscal_map = macro_base["calendar_quarter"].apply(
        lambda q: pd.Series(calendar_to_rl(q), index=["fiscalYear", "period"])
    )
    macro_base = pd.concat([macro_base, fiscal_map], axis=1)
    macro_base["Period"] = "FY" + macro_base["fiscalYear"].astype(int).astype(str) + "-" + macro_base["period"]
    macro_base["days_in_quarter"] = macro_base["period"].map(RL_QUARTER_DAY_MAP)

    return macro_base[[
        "calendar_quarter",
        "Period",
        "fiscalYear",
        "period",
        "days_in_quarter",
        "private_consumption_qoq",
        "private_consumption_source_quarter",
        "private_consumption_source_rule",
        "oil_price_base",
        "oil_source_quarter",
        "oil_source_rule",
        "macro_source_note",
    ]].copy()
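`build_macro_base_from_api` derives the forecast quarters from the last historical date, not from the API series. A minimal sketch of that timing step, using the last RL history date from this run (2025-12-27, FY2026-Q3):

```python
import pandas as pd

# Last RL history row in this run is dated 2025-12-27.
last_history_date = pd.Timestamp("2025-12-27")

last_quarter = last_history_date.to_period("Q")  # 2025Q4 in calendar terms
forecast_quarters = pd.period_range(last_quarter + 1, periods=7, freq="Q")
print([str(q) for q in forecast_quarters])
# runs 2026Q1 through 2027Q3, matching the macro_base table below
```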
api_macro_snapshot = build_api_macro_snapshot(exogenous_data)
display(api_macro_snapshot)
driver role latest_actual_quarter latest_actual_value baseline_rule
0 private_consumption_qoq modeled 2025Q4 1.90 flat_last_actual
1 oil_price modeled 2026Q2 100.94 flat_last_actual
2 cpi_index reference 2026Q1 328.11 reference_only
3 savings_rate reference 2025Q4 4.00 reference_only

1. Load the Chapter 5 Anchor, RL History, and Build the API Macro Base¶

This section now has one clean flow:

  • Chapter 5 exports provide the last completed quarterly anchor,
  • RL raw history provides the calibration base,
  • the macro base is built from API-loaded exogenous data,
  • the forecast horizon uses the correct RL fiscal timing map,
  • the baseline rule is a flat carry-forward from the latest available API actual before scenario shocks are applied.
RL_QUARTER_DAY_MAP = {"Q1": 91, "Q2": 92, "Q3": 92, "Q4": 90}


def scale_to_mm(df):
    out = df.copy()
    numeric_cols = out.select_dtypes(include=[np.number]).columns
    for col in numeric_cols:
        if col != "fiscalYear":
            out[col] = out[col] / 1e6
    return out


def calendar_to_rl(period):
    if period.quarter == 1:
        return period.year, "Q4"
    if period.quarter == 2:
        return period.year + 1, "Q1"
    if period.quarter == 3:
        return period.year + 1, "Q2"
    return period.year + 1, "Q3"
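The fiscal mapping shifts calendar quarters into RL's fiscal year, which ends in the calendar first quarter. A quick standalone check of the same logic (the function body is restated here only so the check runs on its own):

```python
import pandas as pd

def calendar_to_rl(period):
    # Restated from the notebook: calendar Q1 of year Y closes FY(Y) as Q4,
    # and calendar Q2 opens FY(Y+1) as Q1.
    if period.quarter == 1:
        return period.year, "Q4"
    if period.quarter == 2:
        return period.year + 1, "Q1"
    if period.quarter == 3:
        return period.year + 1, "Q2"
    return period.year + 1, "Q3"

assert calendar_to_rl(pd.Period("2026Q1")) == (2026, "Q4")
assert calendar_to_rl(pd.Period("2026Q2")) == (2027, "Q1")
assert calendar_to_rl(pd.Period("2026Q4")) == (2027, "Q3")
```

These cases match the macro_base output further down, where calendar 2026Q1 lands in FY2026-Q4 and calendar 2026Q2 opens FY2027-Q1.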


def build_baseline_anchor():
    ch5_inputs = pd.read_csv(MODULE4_OUT / "chapter5_quarterly_inputs_usdm.csv", parse_dates=["date"])
    ch5_cf = pd.read_csv(MODULE4_OUT / "chapter5_quarterly_cash_flow_statement_usdm.csv")
    ch5_fcf = pd.read_csv(MODULE4_OUT / "chapter5_quarterly_fcf_outputs_usdm.csv")
    integrity = pd.read_csv(MODULE4_OUT / "chapter5_quarterly_integrity_report.csv")

    baseline = (
        ch5_inputs[["Period", "date", "revenue", "ebit", "netIncome", "cash_end_reported"]]
        .merge(ch5_cf[["Period", "CFO", "CFI", "CFF", "cash_end"]], on="Period", how="left")
        .merge(ch5_fcf[["Period", "FCFF", "FCFE_from_NI_bridge"]], on="Period", how="left")
        .merge(integrity[["Period", "passes_all_core_checks"]], on="Period", how="left")
        .sort_values("date")
        .reset_index(drop=True)
    )
    return baseline


def load_rl_history():
    data_dir = PROJECT_ROOT / "DATA" / "csv_export"
    is_df = scale_to_mm(pd.read_csv(data_dir / "RL_income_statement.csv", parse_dates=["date"]))
    bs_df = scale_to_mm(pd.read_csv(data_dir / "RL_balance_sheet.csv", parse_dates=["date"]))
    cf_df = scale_to_mm(pd.read_csv(data_dir / "RL_cash_flow.csv", parse_dates=["date"]))
    ch5_inputs = pd.read_csv(MODULE4_OUT / "chapter5_quarterly_inputs_usdm.csv", parse_dates=["date"])

    history = (
        is_df[[
            "date",
            "fiscalYear",
            "period",
            "revenue",
            "grossProfit",
            "ebit",
            "incomeBeforeTax",
            "incomeTaxExpense",
            "netIncome",
        ]]
        .merge(
            bs_df[[
                "date",
                "fiscalYear",
                "period",
                "cashAndCashEquivalents",
                "shortTermInvestments",
                "netReceivables",
                "inventory",
                "totalCurrentAssets",
                "accountPayables",
                "shortTermDebt",
                "totalCurrentLiabilities",
                "totalNonCurrentAssets",
                "longTermDebt",
                "totalNonCurrentLiabilities",
                "totalStockholdersEquity",
                "totalDebt",
            ]],
            on=["date", "fiscalYear", "period"],
            how="left",
        )
        .merge(
            cf_df[[
                "date",
                "fiscalYear",
                "period",
                "depreciationAndAmortization",
                "deferredIncomeTax",
                "stockBasedCompensation",
                "otherNonCashItems",
                "capitalExpenditure",
                "netDividendsPaid",
                "netCommonStockIssuance",
            ]],
            on=["date", "fiscalYear", "period"],
            how="left",
        )
        .merge(
            ch5_inputs[["date", "fiscalYear", "period", "interestExpense"]],
            on=["date", "fiscalYear", "period"],
            how="left",
        )
        .sort_values("date")
        .reset_index(drop=True)
    )

    history["Period"] = "FY" + history["fiscalYear"].astype(int).astype(str) + "-" + history["period"]
    history["days_in_quarter"] = history["period"].map(RL_QUARTER_DAY_MAP)
    history["cogs"] = history["revenue"] - history["grossProfit"]
    history["gross_margin"] = history["grossProfit"] / history["revenue"]
    history["ebit_margin"] = history["ebit"] / history["revenue"]
    history["opex_ratio"] = history["gross_margin"] - history["ebit_margin"]
    history["rev_yoy"] = history.groupby("period")["revenue"].pct_change()
    history["dso"] = history["netReceivables"] / history["revenue"] * history["days_in_quarter"]
    history["dio"] = history["inventory"] / history["cogs"] * history["days_in_quarter"]
    history["dpo"] = history["accountPayables"] / history["cogs"] * history["days_in_quarter"]
    history["other_current_assets_model"] = (
        history["totalCurrentAssets"]
        - history["cashAndCashEquivalents"]
        - history["shortTermInvestments"]
        - history["netReceivables"]
        - history["inventory"]
    )
    history["other_current_liabilities_model"] = (
        history["totalCurrentLiabilities"] - history["accountPayables"] - history["shortTermDebt"]
    )
    history["other_noncurrent_liabilities_model"] = history["totalNonCurrentLiabilities"] - history["longTermDebt"]

    for col in [
        "other_current_assets_model",
        "other_current_liabilities_model",
        "shortTermInvestments",
        "depreciationAndAmortization",
        "deferredIncomeTax",
        "stockBasedCompensation",
        "otherNonCashItems",
        "capitalExpenditure",
        "netDividendsPaid",
        "netCommonStockIssuance",
    ]:
        history[f"{col}_pct_rev"] = history[col] / history["revenue"]

    history["tax_rate"] = np.where(
        history["incomeBeforeTax"].abs() > 1e-9,
        history["incomeTaxExpense"] / history["incomeBeforeTax"],
        np.nan,
    )
    history["avg_debt"] = (history["totalDebt"] + history["totalDebt"].shift(1)) / 2.0
    history["interest_rate_q"] = history["interestExpense"] / history["avg_debt"]
    return history
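The working-capital conversions above use the standard days formulas: DSO scales receivables by revenue, while DIO and DPO scale inventory and payables by COGS, each multiplied by the days in the quarter. On toy numbers (not RL actuals):

```python
import pandas as pd

q = pd.Series({
    "revenue": 2000.0,
    "cogs": 900.0,
    "netReceivables": 700.0,
    "inventory": 1100.0,
    "accountPayables": 450.0,
    "days_in_quarter": 92,
})

dso = q["netReceivables"] / q["revenue"] * q["days_in_quarter"]  # days sales outstanding
dio = q["inventory"] / q["cogs"] * q["days_in_quarter"]          # days inventory outstanding
dpo = q["accountPayables"] / q["cogs"] * q["days_in_quarter"]    # days payables outstanding
ccc = dso + dio - dpo                                            # cash conversion cycle

print(round(dso, 1), round(dio, 1), round(dpo, 1), round(ccc, 1))
# 32.2 112.4 46.0 98.6
```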


baseline_anchor = build_baseline_anchor()
history = load_rl_history()
macro_history = build_api_macro_history_table(exogenous_data)
macro_base = build_macro_base_from_api(
    history,
    exogenous_data,
    forecast_horizon_quarters=API_MACRO_CONFIG["forecast_horizon_quarters"],
)

rl_timing_map = macro_base[["calendar_quarter", "Period", "fiscalYear", "period", "days_in_quarter"]].copy()

display(baseline_anchor.tail(8))
display(macro_history.tail(12))
display(macro_base)
display(rl_timing_map)
Period date revenue ebit netIncome cash_end_reported CFO CFI CFF cash_end FCFF FCFE_from_NI_bridge passes_all_core_checks
31 FY2024-Q4 2024-03-30 1,567.90 123.10 90.70 1,662.20 121.00 -52.60 -173.80 1,662.20 90.54 76.10 True
32 FY2025-Q1 2024-06-29 1,512.20 227.50 168.60 1,586.90 277.30 -87.60 -253.60 1,586.90 252.38 239.00 True
33 FY2025-Q2 2024-09-28 1,726.00 199.50 147.90 1,355.00 97.20 -192.20 -186.30 1,355.00 64.46 49.60 True
34 FY2025-Q3 2024-12-28 2,143.50 395.30 297.40 1,940.20 738.40 55.40 -131.30 1,940.20 686.19 671.50 True
35 FY2025-Q4 2025-03-29 1,697.30 172.50 129.00 1,922.50 122.20 -39.70 -132.80 1,922.50 50.41 36.80 True
36 FY2026-Q1 2025-06-28 1,719.10 289.50 220.40 2,090.20 176.10 -198.30 114.10 2,090.20 -2.08 481.00 True
37 FY2026-Q2 2025-09-27 2,010.70 244.10 207.50 1,444.40 53.20 -111.50 -573.30 1,444.40 -39.49 -446.10 True
38 FY2026-Q3 2025-12-27 2,406.00 477.10 361.60 2,034.00 779.60 -92.50 -99.30 2,034.00 714.37 704.00 True
calendar_quarter private_consumption_qoq oil_price_actual cpi_index savings_rate
34 2023Q3 3.10 82.22 305.99 5.50
35 2023Q4 3.00 78.50 308.19 5.50
36 2024Q1 1.70 76.91 311.00 6.10
37 2024Q2 3.90 80.67 313.08 5.80
38 2024Q3 4.00 75.39 314.12 5.10
39 2024Q4 3.90 70.30 316.59 4.70
40 2025Q1 0.60 71.52 319.48 5.10
41 2025Q2 2.50 63.82 320.79 5.00
42 2025Q3 3.50 64.99 323.24 4.40
43 2025Q4 1.90 59.09 325.55 4.00
44 2026Q1 NaN 72.61 328.11 NaN
45 2026Q2 NaN 100.94 NaN NaN
calendar_quarter Period fiscalYear period days_in_quarter private_consumption_qoq private_consumption_source_quarter private_consumption_source_rule oil_price_base oil_source_quarter oil_source_rule macro_source_note
0 2026Q1 FY2026-Q4 2026 Q4 90 1.90 2025Q4 API latest actual carried forward 100.94 2026Q2 API latest actual carried forward API base path with flat carry-forward before s...
1 2026Q2 FY2027-Q1 2027 Q1 91 1.90 2025Q4 API latest actual carried forward 100.94 2026Q2 API latest actual carried forward API base path with flat carry-forward before s...
2 2026Q3 FY2027-Q2 2027 Q2 92 1.90 2025Q4 API latest actual carried forward 100.94 2026Q2 API latest actual carried forward API base path with flat carry-forward before s...
3 2026Q4 FY2027-Q3 2027 Q3 92 1.90 2025Q4 API latest actual carried forward 100.94 2026Q2 API latest actual carried forward API base path with flat carry-forward before s...
4 2027Q1 FY2027-Q4 2027 Q4 90 1.90 2025Q4 API latest actual carried forward 100.94 2026Q2 API latest actual carried forward API base path with flat carry-forward before s...
5 2027Q2 FY2028-Q1 2028 Q1 91 1.90 2025Q4 API latest actual carried forward 100.94 2026Q2 API latest actual carried forward API base path with flat carry-forward before s...
6 2027Q3 FY2028-Q2 2028 Q2 92 1.90 2025Q4 API latest actual carried forward 100.94 2026Q2 API latest actual carried forward API base path with flat carry-forward before s...
calendar_quarter Period fiscalYear period days_in_quarter
0 2026Q1 FY2026-Q4 2026 Q4 90
1 2026Q2 FY2027-Q1 2027 Q1 91
2 2026Q3 FY2027-Q2 2027 Q2 92
3 2026Q4 FY2027-Q3 2027 Q3 92
4 2027Q1 FY2027-Q4 2027 Q4 90
5 2027Q2 FY2028-Q1 2028 Q1 91
6 2027Q3 FY2028-Q2 2028 Q2 92

2. Calibrate the Base Model and Define the Scenario Layer¶

To stay close to the chapter PDF, the notebook still uses:

  • a same-quarter calibration profile from recent RL history,
  • a translation table from macro factors to operating drivers,
  • a minimal ScenarioConfig where the case differences live in the configuration layer rather than in separate formulas.

What changed in this section is only the source of the macro base. The translation layer below still encodes explicit modeling assumptions; it is not inferred from the API pull.

TRANSLATION_BETAS = {
    "revenue_yoy_pc_beta": 0.06,
    "revenue_yoy_oil_beta": -0.10,
    "gross_margin_pc_beta": 0.005,
    "gross_margin_oil_beta": -0.025,
    "opex_ratio_pc_beta": -0.008,
    "opex_ratio_oil_beta": 0.015,
    "dso_pc_beta_days": -0.5,
    "dso_oil_beta_days": 4.0,
    "dio_pc_beta_days": -2.0,
    "dio_oil_beta_days": 10.0,
    "dpo_pc_beta_days": 0.0,
    "dpo_oil_beta_days": 4.0,
    "other_ca_pc_beta_mult": -0.03,
    "other_ca_oil_beta_mult": 0.02,
    "other_cl_pc_beta_mult": -0.02,
    "other_cl_oil_beta_mult": 0.02,
    "sti_oil_beta_mult": 0.02,
    "cash_deleveraging_buffer": 250.0,
}
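The betas are applied by the scenario engine later in the notebook; as a hedged sketch of the intended linear translation, a macro deviation from the reference path shifts a baseline operating driver by beta times the deviation. The helper name `shocked_revenue_yoy` and the deltas below are illustrative, and the engine's exact functional form may differ:

```python
# Restated subset of TRANSLATION_BETAS from the notebook.
betas = {"revenue_yoy_pc_beta": 0.06, "revenue_yoy_oil_beta": -0.10}

def shocked_revenue_yoy(base_yoy, pc_delta, oil_pct_delta, betas):
    # Additive linear translation: each macro deviation from the reference
    # path shifts baseline revenue growth by its beta.
    return (
        base_yoy
        + betas["revenue_yoy_pc_beta"] * pc_delta
        + betas["revenue_yoy_oil_beta"] * oil_pct_delta
    )

# Base 4% YoY growth; consumption 1pt above reference, oil 10% above reference.
print(shocked_revenue_yoy(0.04, 1.0, 0.10, betas))
```

The signs carry the economic story: stronger private consumption lifts revenue growth, while a higher oil price drags on it.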


def build_calibration(history):
    profile_columns = [
        "rev_yoy",
        "gross_margin",
        "opex_ratio",
        "ebit_margin",
        "dso",
        "dio",
        "dpo",
        "other_current_assets_model_pct_rev",
        "other_current_liabilities_model_pct_rev",
        "shortTermInvestments_pct_rev",
        "depreciationAndAmortization_pct_rev",
        "deferredIncomeTax_pct_rev",
        "stockBasedCompensation_pct_rev",
        "otherNonCashItems_pct_rev",
        "capitalExpenditure_pct_rev",
        "netDividendsPaid_pct_rev",
        "netCommonStockIssuance_pct_rev",
    ]

    profiles = []
    for period in ["Q1", "Q2", "Q3", "Q4"]:
        tail = history.loc[history["period"] == period].tail(2)
        row = {"period": period}
        for col in profile_columns:
            row[col] = tail[col].mean()
        profiles.append(row)

    quarter_profiles = pd.DataFrame(profiles).set_index("period")
    tax_rate_default = history.loc[history["tax_rate"].between(-0.1, 0.5), "tax_rate"].tail(8).median()
    if pd.isna(tax_rate_default):
        tax_rate_default = 0.21

    interest_rate_q = history["interest_rate_q"].tail(4).mean()
    if pd.isna(interest_rate_q):
        interest_rate_q = 0.0035

    return {
        "quarter_profiles": quarter_profiles,
        "tax_rate_default": float(tax_rate_default),
        "interest_rate_q": float(interest_rate_q),
        "opening_state": history.iloc[-1].copy(),
        "opening_short_term_debt": float(history.iloc[-1]["shortTermDebt"]),
    }
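A toy illustration (made-up numbers) of the seasonal averaging rule above: each quarter's base profile is the mean of that fiscal quarter's last two observations.

```python
import pandas as pd

# Made-up mini history: three Q1 observations; only the last two enter the base.
toy_history = pd.DataFrame({"period": ["Q1", "Q1", "Q1"], "rev_yoy": [0.02, 0.06, 0.10]})
q1_base = toy_history.loc[toy_history["period"] == "Q1"].tail(2)["rev_yoy"].mean()
# mean of 0.06 and 0.10 -> 0.08
```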


calibration = build_calibration(history)
macro_reference = {
    "private_consumption_reference": float(macro_base["private_consumption_qoq"].mean()),
    "oil_price_reference": float(macro_base["oil_price_base"].mean()),
}

quarter_profile_display = (
    calibration["quarter_profiles"]
    .rename(
        columns={
            "rev_yoy": "revenue_yoy_base",
            "gross_margin": "gross_margin_base",
            "opex_ratio": "opex_ratio_base",
            "ebit_margin": "ebit_margin_base",
            "dso": "dso_base",
            "dio": "dio_base",
            "dpo": "dpo_base",
            "other_current_assets_model_pct_rev": "other_current_assets_pct_base",
            "other_current_liabilities_model_pct_rev": "other_current_liabilities_pct_base",
            "shortTermInvestments_pct_rev": "short_term_investments_pct_base",
            "depreciationAndAmortization_pct_rev": "depreciation_pct_base",
            "deferredIncomeTax_pct_rev": "deferred_tax_pct_base",
            "stockBasedCompensation_pct_rev": "stock_comp_pct_base",
            "otherNonCashItems_pct_rev": "other_non_cash_pct_base",
            "capitalExpenditure_pct_rev": "capex_pct_base",
            "netDividendsPaid_pct_rev": "dividend_pct_base",
            "netCommonStockIssuance_pct_rev": "buyback_pct_base",
        }
    )
    .reset_index()
)
quarter_profile_display["tax_rate_default"] = calibration["tax_rate_default"]
quarter_profile_display["interest_rate_q_default"] = calibration["interest_rate_q"]

translation_table = pd.DataFrame(
    [
        {
            "driver": "revenue_yoy",
            "private_consumption_effect": TRANSLATION_BETAS["revenue_yoy_pc_beta"],
            "oil_effect": TRANSLATION_BETAS["revenue_yoy_oil_beta"],
            "comment": "additive shift to same-quarter revenue growth baseline",
        },
        {
            "driver": "gross_margin",
            "private_consumption_effect": TRANSLATION_BETAS["gross_margin_pc_beta"],
            "oil_effect": TRANSLATION_BETAS["gross_margin_oil_beta"],
            "comment": "stronger demand supports pricing; higher oil compresses margin",
        },
        {
            "driver": "opex_ratio",
            "private_consumption_effect": TRANSLATION_BETAS["opex_ratio_pc_beta"],
            "oil_effect": TRANSLATION_BETAS["opex_ratio_oil_beta"],
            "comment": "operating leverage improves under stronger demand",
        },
        {
            "driver": "working_capital_days",
            "private_consumption_effect": "DSO/DIO/DPO day shifts",
            "oil_effect": "DSO/DIO/DPO day shifts",
            "comment": "days-based working-capital response layer",
        },
    ]
)


@dataclass(frozen=True)
class ScenarioConfig:
    name: str
    overrides: dict
    notes: str = ""
    cash_floor: float = 1200.0
    order: int = 0


scenario_registry = [
    ScenarioConfig(
        name="base",
        overrides={
            "oil_pct_shock": 0.00,
            "private_consumption_pp_shock": 0.00,
            "capex_multiplier": 1.00,
            "dividend_multiplier": 1.00,
            "buyback_multiplier": 1.00,
        },
        notes="Consensus oil path and consensus private-consumption path.",
        cash_floor=1200.0,
        order=0,
    ),
    ScenarioConfig(
        name="upside",
        overrides={
            "oil_pct_shock": -0.05,
            "private_consumption_pp_shock": 0.30,
            "capex_multiplier": 1.05,
            "dividend_multiplier": 1.00,
            "buyback_multiplier": 1.10,
        },
        notes="Lower oil and firmer demand with modestly stronger capital returns.",
        cash_floor=1200.0,
        order=1,
    ),
    ScenarioConfig(
        name="downside",
        overrides={
            "oil_pct_shock": 0.10,
            "private_consumption_pp_shock": -0.50,
            "capex_multiplier": 0.90,
            "dividend_multiplier": 1.00,
            "buyback_multiplier": 0.50,
        },
        notes="Higher oil and softer demand with partial buyback restraint.",
        cash_floor=1250.0,
        order=2,
    ),
    ScenarioConfig(
        name="liquidity_stress",
        overrides={
            "oil_pct_shock": 0.20,
            "private_consumption_pp_shock": -1.00,
            "capex_multiplier": 0.90,
            "dividend_multiplier": 1.00,
            "buyback_multiplier": 0.50,
        },
        notes="Oil spike and sharp demand slowdown with a higher operating cash floor.",
        cash_floor=1400.0,
        order=3,
    ),
]
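Because `ScenarioConfig` is a frozen dataclass, a registry entry cannot be mutated after construction, and any override key omitted from `overrides` falls back to a neutral default via `dict.get` (the same pattern `build_driver_path` uses). A minimal sketch with a hypothetical `DemoScenario` class, not the notebook's own:

```python
from dataclasses import FrozenInstanceError, dataclass


@dataclass(frozen=True)
class DemoScenario:
    name: str
    overrides: dict
    cash_floor: float = 1200.0


demo = DemoScenario(name="demo", overrides={"oil_pct_shock": 0.10})
# A key omitted from overrides falls back to a neutral value:
capex_multiplier = float(demo.overrides.get("capex_multiplier", 1.0))
# Frozen instances reject mutation after construction:
try:
    demo.cash_floor = 0.0
except FrozenInstanceError:
    mutation_blocked = True
```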

scenario_registry_df = pd.DataFrame(
    [
        {
            "scenario": scenario.name,
            "order": scenario.order,
            "notes": scenario.notes,
            "cash_floor": scenario.cash_floor,
            **scenario.overrides,
        }
        for scenario in scenario_registry
    ]
).sort_values("order")

display(pd.Series(macro_reference))
display(quarter_profile_display)
display(translation_table)
display(scenario_registry_df)
private_consumption_reference     1.90
oil_price_reference             100.94
dtype: float64
period revenue_yoy_base gross_margin_base opex_ratio_base ebit_margin_base dso_base dio_base dpo_base other_current_assets_pct_base other_current_liabilities_pct_base short_term_investments_pct_base depreciation_pct_base deferred_tax_pct_base stock_comp_pct_base other_non_cash_pct_base capex_pct_base dividend_pct_base buyback_pct_base tax_rate_default interest_rate_q_default
0 Q1 0.07 0.71 0.55 0.16 29.17 222.54 106.83 0.10 0.73 0.11 0.03 0.00 0.01 0.00 -0.07 -0.03 -0.16 0.21 0.00
1 Q2 0.11 0.67 0.56 0.12 33.00 181.03 78.02 0.08 0.61 0.15 0.03 0.00 0.02 0.00 -0.04 -0.03 -0.07 0.21 0.00
2 Q3 0.12 0.69 0.50 0.19 24.37 140.81 67.77 0.08 0.56 0.09 0.03 0.00 0.01 -0.00 -0.03 -0.02 -0.03 0.21 0.00
3 Q4 0.05 0.68 0.59 0.09 32.33 157.77 65.40 0.08 0.74 0.09 0.03 -0.03 0.01 0.00 -0.04 -0.03 -0.06 0.21 0.00
driver private_consumption_effect oil_effect comment
0 revenue_yoy 0.06 -0.10 additive shift to same-quarter revenue growth ...
1 gross_margin 0.01 -0.03 stronger demand supports pricing; higher oil c...
2 opex_ratio -0.01 0.01 operating leverage improves under stronger demand
3 working_capital_days DSO/DIO/DPO day shifts DSO/DIO/DPO day shifts days-based working-capital response layer
scenario order notes cash_floor oil_pct_shock private_consumption_pp_shock capex_multiplier dividend_multiplier buyback_multiplier
0 base 0 Consensus oil path and consensus private-consu... 1,200.00 0.00 0.00 1.00 1.00 1.00
1 upside 1 Lower oil and firmer demand with modestly stro... 1,200.00 -0.05 0.30 1.05 1.00 1.10
2 downside 2 Higher oil and softer demand with partial buyb... 1,250.00 0.10 -0.50 0.90 1.00 0.50
3 liquidity_stress 3 Oil spike and sharp demand slowdown with a hig... 1,400.00 0.20 -1.00 0.90 1.00 0.50

3. One Shared Forecasting Path for Every Scenario¶

This is the Chapter 6 core rule: the formulas stay fixed, and scenarios change only the assumptions. The configuration layer controls the overrides, while the forecasting and diagnostic functions are shared by every run.
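Schematically, the rule looks like this (a minimal sketch with a made-up formula, not the RL model): one shared function, evaluated under per-scenario assumption dicts.

```python
def shared_forecast(assumptions):
    # One fixed formula; only the assumption dict differs per scenario.
    return 100.0 * (1.0 + assumptions["growth"])


scenario_assumptions = {
    "base": {"growth": 0.05},
    "downside": {"growth": -0.02},
}
runs = {name: shared_forecast(a) for name, a in scenario_assumptions.items()}
```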

PROFILE_RENAME = {
    "gross_margin": "gross_margin_base",
    "opex_ratio": "opex_ratio_base",
    "ebit_margin": "ebit_margin_base",
    "rev_yoy": "revenue_yoy_base",
    "dso": "dso_base",
    "dio": "dio_base",
    "dpo": "dpo_base",
    "other_current_assets_model_pct_rev": "other_current_assets_pct_base",
    "other_current_liabilities_model_pct_rev": "other_current_liabilities_pct_base",
    "shortTermInvestments_pct_rev": "short_term_investments_pct_base",
    "depreciationAndAmortization_pct_rev": "depreciation_pct_base",
    "deferredIncomeTax_pct_rev": "deferred_tax_pct_base",
    "stockBasedCompensation_pct_rev": "stock_comp_pct_base",
    "otherNonCashItems_pct_rev": "other_non_cash_pct_base",
    "capitalExpenditure_pct_rev": "capex_pct_base",
    "netDividendsPaid_pct_rev": "dividend_pct_base",
    "netCommonStockIssuance_pct_rev": "buyback_pct_base",
}


def build_driver_path(macro_base, calibration, scenario, macro_reference):
    """Apply scenario shocks and translation betas to the macro base, one assumption column per driver."""
    profiles = calibration["quarter_profiles"].reset_index().rename(columns=PROFILE_RENAME)
    driver_path = macro_base.merge(profiles, on="period", how="left").copy()

    oil_pct_shock = float(scenario.overrides.get("oil_pct_shock", 0.0))
    private_consumption_pp_shock = float(scenario.overrides.get("private_consumption_pp_shock", 0.0))
    capex_multiplier = float(scenario.overrides.get("capex_multiplier", 1.0))
    dividend_multiplier = float(scenario.overrides.get("dividend_multiplier", 1.0))
    buyback_multiplier = float(scenario.overrides.get("buyback_multiplier", 1.0))

    driver_path["scenario"] = scenario.name
    driver_path["scenario_order"] = scenario.order
    driver_path["scenario_notes"] = scenario.notes
    driver_path["cash_floor"] = scenario.cash_floor
    driver_path["oil_pct_shock"] = oil_pct_shock
    driver_path["private_consumption_pp_shock"] = private_consumption_pp_shock

    driver_path["oil_price_scenario"] = driver_path["oil_price_base"] * (1.0 + oil_pct_shock)
    driver_path["private_consumption_qoq_scenario"] = (
        driver_path["private_consumption_qoq"] + private_consumption_pp_shock
    )

    driver_path["oil_gap_pct"] = driver_path["oil_price_scenario"] / macro_reference["oil_price_reference"] - 1.0
    driver_path["private_consumption_gap"] = (
        driver_path["private_consumption_qoq_scenario"] - macro_reference["private_consumption_reference"]
    )

    driver_path["revenue_yoy_assumption"] = (
        driver_path["revenue_yoy_base"]
        + TRANSLATION_BETAS["revenue_yoy_pc_beta"] * driver_path["private_consumption_gap"]
        + TRANSLATION_BETAS["revenue_yoy_oil_beta"] * driver_path["oil_gap_pct"]
    )
    driver_path["gross_margin_assumption"] = (
        driver_path["gross_margin_base"]
        + TRANSLATION_BETAS["gross_margin_pc_beta"] * driver_path["private_consumption_gap"]
        + TRANSLATION_BETAS["gross_margin_oil_beta"] * driver_path["oil_gap_pct"]
    )
    driver_path["opex_ratio_assumption"] = (
        driver_path["opex_ratio_base"]
        + TRANSLATION_BETAS["opex_ratio_pc_beta"] * driver_path["private_consumption_gap"]
        + TRANSLATION_BETAS["opex_ratio_oil_beta"] * driver_path["oil_gap_pct"]
    )
    driver_path["ebit_margin_assumption"] = (
        driver_path["gross_margin_assumption"] - driver_path["opex_ratio_assumption"]
    )

    driver_path["dso_assumption"] = (
        driver_path["dso_base"]
        + TRANSLATION_BETAS["dso_pc_beta_days"] * driver_path["private_consumption_gap"]
        + TRANSLATION_BETAS["dso_oil_beta_days"] * driver_path["oil_gap_pct"]
    )
    driver_path["dio_assumption"] = (
        driver_path["dio_base"]
        + TRANSLATION_BETAS["dio_pc_beta_days"] * driver_path["private_consumption_gap"]
        + TRANSLATION_BETAS["dio_oil_beta_days"] * driver_path["oil_gap_pct"]
    )
    driver_path["dpo_assumption"] = (
        driver_path["dpo_base"]
        + TRANSLATION_BETAS["dpo_pc_beta_days"] * driver_path["private_consumption_gap"]
        + TRANSLATION_BETAS["dpo_oil_beta_days"] * driver_path["oil_gap_pct"]
    )

    driver_path["other_current_assets_pct_assumption"] = (
        driver_path["other_current_assets_pct_base"]
        * (
            1.0
            + TRANSLATION_BETAS["other_ca_pc_beta_mult"] * driver_path["private_consumption_gap"]
            + TRANSLATION_BETAS["other_ca_oil_beta_mult"] * driver_path["oil_gap_pct"]
        )
    )
    driver_path["other_current_liabilities_pct_assumption"] = (
        driver_path["other_current_liabilities_pct_base"]
        * (
            1.0
            + TRANSLATION_BETAS["other_cl_pc_beta_mult"] * driver_path["private_consumption_gap"]
            + TRANSLATION_BETAS["other_cl_oil_beta_mult"] * driver_path["oil_gap_pct"]
        )
    )
    driver_path["short_term_investments_pct_assumption"] = (
        driver_path["short_term_investments_pct_base"]
        * (1.0 + TRANSLATION_BETAS["sti_oil_beta_mult"] * driver_path["oil_gap_pct"])
    )

    driver_path["depreciation_pct_assumption"] = driver_path["depreciation_pct_base"]
    driver_path["deferred_tax_pct_assumption"] = driver_path["deferred_tax_pct_base"]
    driver_path["stock_comp_pct_assumption"] = driver_path["stock_comp_pct_base"]
    driver_path["other_non_cash_pct_assumption"] = driver_path["other_non_cash_pct_base"]
    driver_path["capex_pct_assumption"] = driver_path["capex_pct_base"].abs() * capex_multiplier
    driver_path["dividend_pct_assumption"] = driver_path["dividend_pct_base"].abs() * dividend_multiplier
    driver_path["buyback_pct_assumption"] = driver_path["buyback_pct_base"].abs() * buyback_multiplier
    driver_path["tax_rate_assumption"] = calibration["tax_rate_default"]
    driver_path["interest_rate_q_assumption"] = calibration["interest_rate_q"]
    return driver_path
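The `.abs()` calls above encode a sign convention worth making explicit: historical cash-use ratios carry negative signs, the driver path stores magnitudes, and `run_case` re-signs them as outflows. A toy check with invented numbers:

```python
# Invented numbers tracing the capex sign convention.
capex_pct_base = -0.05                             # historical ratio is negative (cash use)
capex_pct_assumption = abs(capex_pct_base) * 1.05  # driver path keeps the magnitude, scaled
capex = -9_000.0 * capex_pct_assumption            # re-signed as an outflow, ~-472.5
```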


def run_case(driver_path, calibration, history, scenario):
    """Roll the three-statement forecast forward quarter by quarter for one scenario."""
    opening = calibration["opening_state"]
    opening_short_term_debt = calibration["opening_short_term_debt"]
    prior_revenue_lookup = history.set_index("Period")["revenue"].to_dict()

    prev_cash = float(opening["cashAndCashEquivalents"])
    prev_sti = float(opening["shortTermInvestments"])
    prev_receivables = float(opening["netReceivables"])
    prev_inventory = float(opening["inventory"])
    prev_other_ca = float(opening["other_current_assets_model"])
    prev_ap = float(opening["accountPayables"])
    prev_other_cl = float(opening["other_current_liabilities_model"])
    prev_noncurrent_assets = float(opening["totalNonCurrentAssets"])
    prev_short_term_debt = float(opening["shortTermDebt"])
    prev_long_term_debt = float(opening["longTermDebt"])
    prev_other_ncl = float(opening["other_noncurrent_liabilities_model"])
    prev_equity = float(opening["totalStockholdersEquity"])

    forecast_rows = []
    diagnostic_rows = []

    for _, row in driver_path.sort_values("calendar_quarter").iterrows():
        prior_same_period = f"FY{int(row['fiscalYear']) - 1}-{row['period']}"
        if prior_same_period not in prior_revenue_lookup:
            raise KeyError(f"Missing prior-year quarter revenue for {prior_same_period}")

        revenue = float(prior_revenue_lookup[prior_same_period] * (1.0 + row["revenue_yoy_assumption"]))
        prior_revenue_lookup[row["Period"]] = revenue

        gross_margin = float(row["gross_margin_assumption"])
        opex_ratio = float(row["opex_ratio_assumption"])
        ebit_margin = float(row["ebit_margin_assumption"])
        gross_profit = revenue * gross_margin
        cogs = revenue - gross_profit
        ebit = revenue * ebit_margin

        days = float(row["days_in_quarter"])
        dso = float(row["dso_assumption"])
        dio = float(row["dio_assumption"])
        dpo = float(row["dpo_assumption"])

        receivables_end = revenue * dso / days
        inventory_end = cogs * dio / days
        ap_end = cogs * dpo / days
        other_ca_end = revenue * float(row["other_current_assets_pct_assumption"])
        other_cl_end = revenue * float(row["other_current_liabilities_pct_assumption"])

        depreciation = revenue * float(row["depreciation_pct_assumption"])
        deferred_tax = revenue * float(row["deferred_tax_pct_assumption"])
        stock_comp = revenue * float(row["stock_comp_pct_assumption"])
        other_non_cash = revenue * float(row["other_non_cash_pct_assumption"])

        capex = -revenue * float(row["capex_pct_assumption"])
        dividends = -revenue * float(row["dividend_pct_assumption"])
        buybacks = -revenue * float(row["buyback_pct_assumption"])

        interest_expense = (prev_short_term_debt + prev_long_term_debt) * float(row["interest_rate_q_assumption"])
        pretax_income = ebit - interest_expense
        income_tax = pretax_income * float(row["tax_rate_assumption"])
        net_income = pretax_income - income_tax

        change_in_owc_use_of_cash = (
            (receivables_end - prev_receivables)
            + (inventory_end - prev_inventory)
            + (other_ca_end - prev_other_ca)
            - (ap_end - prev_ap)
            - (other_cl_end - prev_other_cl)
        )
        change_in_working_capital = -change_in_owc_use_of_cash

        cfo = net_income + depreciation + deferred_tax + stock_comp + other_non_cash + change_in_working_capital

        short_term_investments_end = revenue * float(row["short_term_investments_pct_assumption"])
        delta_short_term_investments = short_term_investments_end - prev_sti
        cfi = capex - delta_short_term_investments

        cff_pre_debt = dividends + buybacks
        cash_pre_debt = prev_cash + cfo + cfi + cff_pre_debt
        debt_draw = max(float(row["cash_floor"]) - cash_pre_debt, 0.0)
        debt_paydown_capacity = max(
            cash_pre_debt - (float(row["cash_floor"]) + TRANSLATION_BETAS["cash_deleveraging_buffer"]),
            0.0,
        )
        debt_paydown = min(debt_paydown_capacity, prev_short_term_debt + debt_draw)
        net_debt_issuance = debt_draw - debt_paydown
        short_term_debt_end = max(prev_short_term_debt + net_debt_issuance, 0.0)
        cash_end = cash_pre_debt + net_debt_issuance
        cff = cff_pre_debt + net_debt_issuance

        total_current_assets = cash_end + short_term_investments_end + receivables_end + inventory_end + other_ca_end
        noncurrent_assets_end = prev_noncurrent_assets + abs(capex) - depreciation
        total_assets = total_current_assets + noncurrent_assets_end

        total_current_liabilities = ap_end + short_term_debt_end + other_cl_end
        total_noncurrent_liabilities = prev_long_term_debt + prev_other_ncl
        total_liabilities = total_current_liabilities + total_noncurrent_liabilities

        equity_roll_forward_expected = prev_equity + net_income + dividends + buybacks
        equity_model = total_assets - total_liabilities
        balance_error = equity_model - equity_roll_forward_expected

        ebitda = ebit + depreciation
        net_debt = short_term_debt_end + prev_long_term_debt - cash_end
        net_debt_to_ebitda = net_debt / max(ebitda, 1e-6)
        cash_headroom = cash_end - float(row["cash_floor"])
        incremental_revolver_draw = max(short_term_debt_end - opening_short_term_debt, 0.0)

        hard_fail_reasons = []
        if cash_end + 1e-6 < float(row["cash_floor"]):
            hard_fail_reasons.append("cash floor breached")
        if revenue <= 0:
            hard_fail_reasons.append("non-positive revenue")
        if gross_margin <= 0:
            hard_fail_reasons.append("non-positive gross margin")
        if opex_ratio <= 0:
            hard_fail_reasons.append("non-positive opex ratio")
        if ebit_margin <= -0.15:
            hard_fail_reasons.append("ebit margin below model floor")
        if min(
            receivables_end,
            inventory_end,
            ap_end,
            other_ca_end,
            other_cl_end,
            short_term_investments_end,
            short_term_debt_end,
        ) < -1e-6:
            hard_fail_reasons.append("negative modeled balance")

        soft_warning_reasons = []
        if abs(balance_error) > 100.0:
            soft_warning_reasons.append("large balance-sheet roll gap")
        if net_debt_to_ebitda > 3.0:
            soft_warning_reasons.append("net debt to EBITDA above 3.0x")
        if cash_headroom < 100.0:
            soft_warning_reasons.append("cash headroom below 100 USD mm")
        if incremental_revolver_draw > 0:
            soft_warning_reasons.append("incremental revolver draw above opening level")
        if row["revenue_yoy_assumption"] < 0:
            soft_warning_reasons.append("negative same-quarter revenue growth")

        period_status = "FAIL" if hard_fail_reasons else "WARN" if soft_warning_reasons else "PASS"

        forecast_rows.append(
            {
                "scenario": scenario.name,
                "scenario_order": scenario.order,
                "Period": row["Period"],
                "calendar_quarter": row["calendar_quarter"],
                "fiscalYear": row["fiscalYear"],
                "period": row["period"],
                "oil_price": row["oil_price_scenario"],
                "private_consumption_qoq": row["private_consumption_qoq_scenario"],
                "revenue_yoy_assumption": row["revenue_yoy_assumption"],
                "gross_margin_assumption": gross_margin,
                "opex_ratio_assumption": opex_ratio,
                "ebit_margin_assumption": ebit_margin,
                "revenue": revenue,
                "gross_profit": gross_profit,
                "cogs": cogs,
                "ebit": ebit,
                "interest_expense": interest_expense,
                "pretax_income": pretax_income,
                "income_tax": income_tax,
                "net_income": net_income,
                "dso": dso,
                "dio": dio,
                "dpo": dpo,
                "net_receivables": receivables_end,
                "inventory": inventory_end,
                "account_payables": ap_end,
                "other_current_assets_model": other_ca_end,
                "other_current_liabilities_model": other_cl_end,
                "CFO": cfo,
                "capital_expenditure": capex,
                "CFI": cfi,
                "net_dividends_paid": dividends,
                "net_common_stock_issuance": buybacks,
                "net_debt_issuance": net_debt_issuance,
                "CFF": cff,
                "cash_begin": prev_cash,
                "cash_end": cash_end,
                "cash_floor": row["cash_floor"],
                "cash_headroom": cash_headroom,
                "short_term_investments": short_term_investments_end,
                "short_term_debt": short_term_debt_end,
                "long_term_debt": prev_long_term_debt,
                "total_assets": total_assets,
                "total_liabilities": total_liabilities,
                "equity_model": equity_model,
                "equity_roll_forward_expected": equity_roll_forward_expected,
                "balance_error": balance_error,
                "EBITDA": ebitda,
                "net_debt": net_debt,
                "NetDebt_to_EBITDA": net_debt_to_ebitda,
                "incremental_revolver_draw": incremental_revolver_draw,
                "period_status": period_status,
            }
        )

        diagnostic_rows.append(
            {
                "scenario": scenario.name,
                "scenario_order": scenario.order,
                "Period": row["Period"],
                "calendar_quarter": row["calendar_quarter"],
                "hard_fail": bool(hard_fail_reasons),
                "hard_fail_count": len(hard_fail_reasons),
                "hard_fail_reasons": "; ".join(hard_fail_reasons),
                "soft_warning_count": len(soft_warning_reasons),
                "soft_warning_reasons": "; ".join(soft_warning_reasons),
                "cash_headroom": cash_headroom,
                "balance_error_abs": abs(balance_error),
                "incremental_revolver_draw": incremental_revolver_draw,
                "NetDebt_to_EBITDA": net_debt_to_ebitda,
                "period_status": period_status,
            }
        )

        prev_cash = cash_end
        prev_sti = short_term_investments_end
        prev_receivables = receivables_end
        prev_inventory = inventory_end
        prev_other_ca = other_ca_end
        prev_ap = ap_end
        prev_other_cl = other_cl_end
        prev_noncurrent_assets = noncurrent_assets_end
        prev_short_term_debt = short_term_debt_end
        prev_equity = equity_model

    forecast = pd.DataFrame(forecast_rows)
    diagnostics = pd.DataFrame(diagnostic_rows)
    status = pd.DataFrame(
        [
            {
                "scenario": scenario.name,
                "scenario_order": scenario.order,
                "status": (
                    "FAIL"
                    if diagnostics["hard_fail"].any()
                    else "WARN"
                    if diagnostics["soft_warning_count"].sum() > 0
                    else "PASS"
                ),
                "notes": scenario.notes,
                "cash_floor": scenario.cash_floor,
                "latest_period": forecast["Period"].iloc[-1],
                "hard_fail_count": int(diagnostics["hard_fail_count"].sum()),
                "soft_warning_count": int(diagnostics["soft_warning_count"].sum()),
                "min_cash": float(forecast["cash_end"].min()),
                "ending_cash": float(forecast["cash_end"].iloc[-1]),
                "min_cash_headroom": float(forecast["cash_headroom"].min()),
                "ending_short_term_debt": float(forecast["short_term_debt"].iloc[-1]),
                "max_short_term_debt": float(forecast["short_term_debt"].max()),
                "max_incremental_revolver_draw": float(forecast["incremental_revolver_draw"].max()),
                "max_net_debt_to_ebitda": float(forecast["NetDebt_to_EBITDA"].max()),
                "max_balance_error_abs": float(abs(forecast["balance_error"]).max()),
            }
        ]
    )
    return {"forecast": forecast, "diagnostics": diagnostics, "status": status}
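The cash-floor revolver rule inside `run_case` can be traced with a toy quarter (invented numbers): cash below the floor triggers a draw, and paydown needs headroom above the floor plus the deleveraging buffer.

```python
# Invented toy quarter tracing the revolver mechanics in run_case.
cash_floor, deleveraging_buffer = 1_200.0, 250.0
prev_short_term_debt = 500.0
cash_pre_debt = 1_100.0                                         # quarter ends below the floor
debt_draw = max(cash_floor - cash_pre_debt, 0.0)                # 100: draw back up to the floor
paydown_capacity = max(
    cash_pre_debt - (cash_floor + deleveraging_buffer), 0.0
)                                                               # 0: no headroom to delever
debt_paydown = min(paydown_capacity, prev_short_term_debt + debt_draw)  # 0
net_debt_issuance = debt_draw - debt_paydown                    # +100 net draw
cash_end = cash_pre_debt + net_debt_issuance                    # restored to 1,200
```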


def run_named_scenarios(history, calibration, macro_base, scenario_registry, macro_reference):
    """Run every registered scenario through the shared driver-path and forecast functions."""
    driver_paths = []
    forecasts = []
    diagnostics = []
    statuses = []
    for scenario in scenario_registry:
        driver_path = build_driver_path(macro_base, calibration, scenario, macro_reference)
        run = run_case(driver_path, calibration, history, scenario)
        driver_paths.append(driver_path)
        forecasts.append(run["forecast"])
        diagnostics.append(run["diagnostics"])
        statuses.append(run["status"])
    return {
        "scenario_driver_paths": pd.concat(driver_paths, ignore_index=True),
        "scenario_forecast": pd.concat(forecasts, ignore_index=True),
        "scenario_diagnostics": pd.concat(diagnostics, ignore_index=True),
        "scenario_status": pd.concat(statuses, ignore_index=True).sort_values("scenario_order"),
    }


def package_comparison_tables(named_results):
    """Reshape scenario forecasts into long, summary, and latest-quarter comparison tables."""
    forecast = named_results["scenario_forecast"].copy()
    status = named_results["scenario_status"].copy()
    comparison_long = (
        forecast[["scenario", "Period", "revenue", "ebit", "net_income", "CFO", "CFI", "CFF", "cash_end", "short_term_debt", "NetDebt_to_EBITDA"]]
        .melt(id_vars=["scenario", "Period"], var_name="metric", value_name="value")
        .merge(status[["scenario", "status"]], on="scenario", how="left")
        .sort_values(["scenario", "Period", "metric"])
        .reset_index(drop=True)
    )
    comparison_long["units"] = comparison_long["metric"].map(
        {
            "revenue": "USD mm",
            "ebit": "USD mm",
            "net_income": "USD mm",
            "CFO": "USD mm",
            "CFI": "USD mm",
            "CFF": "USD mm",
            "cash_end": "USD mm",
            "short_term_debt": "USD mm",
            "NetDebt_to_EBITDA": "x",
        }
    )

    latest = (
        forecast.sort_values(["scenario_order", "calendar_quarter"])
        .groupby("scenario")
        .tail(1)
        .loc[:, ["scenario", "Period", "revenue", "ebit", "cash_end", "short_term_debt", "NetDebt_to_EBITDA"]]
        .rename(
            columns={
                "Period": "comparison_latest_period",
                "revenue": "comparison_latest_revenue",
                "ebit": "comparison_latest_ebit",
                "cash_end": "comparison_latest_cash_end",
                "short_term_debt": "comparison_latest_short_term_debt",
                "NetDebt_to_EBITDA": "comparison_latest_net_debt_to_ebitda",
            }
        )
        .reset_index(drop=True)
    )

    comparison_summary = status.merge(latest, on="scenario", how="left")
    latest_pivot = comparison_summary.set_index("scenario")[
        [
            "latest_period",
            "comparison_latest_revenue",
            "comparison_latest_ebit",
            "ending_cash",
            "max_short_term_debt",
            "max_net_debt_to_ebitda",
            "status",
        ]
    ]
    return {
        "comparison_long": comparison_long,
        "comparison_summary": comparison_summary,
        "latest_pivot": latest_pivot,
    }
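The wide-to-long reshape in `package_comparison_tables` is plain pandas `melt`; a toy frame (invented values) shows the output shape, one row per (scenario, Period, metric):

```python
import pandas as pd

# Invented one-row wide frame with two metric columns.
toy_wide = pd.DataFrame({
    "scenario": ["base"],
    "Period": ["FY2026-Q1"],
    "revenue": [9_000.0],
    "ebit": [1_400.0],
})
toy_long = toy_wide.melt(id_vars=["scenario", "Period"], var_name="metric", value_name="value")
# two rows: one for revenue, one for ebit
```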


def run_sensitivity_grid(history, calibration, macro_base, macro_reference, oil_shocks=None, private_consumption_shocks=None):
    """Evaluate the oil x private-consumption shock grid and pivot the liquidity metrics."""
    if oil_shocks is None:
        oil_shocks = np.array([-0.15, -0.10, -0.05, 0.00, 0.05, 0.10, 0.15])
    if private_consumption_shocks is None:
        private_consumption_shocks = np.array([-1.00, -0.50, -0.25, 0.00, 0.25, 0.50, 1.00])

    opening_short_term_debt = calibration["opening_short_term_debt"]
    records = []

    for pc_shock in private_consumption_shocks:
        for oil_shock in oil_shocks:
            scenario = ScenarioConfig(
                name=f"pc_{pc_shock:+.2f}__oil_{oil_shock:+.2%}",
                overrides={
                    "oil_pct_shock": float(oil_shock),
                    "private_consumption_pp_shock": float(pc_shock),
                    "capex_multiplier": 1.0,
                    "dividend_multiplier": 1.0,
                    "buyback_multiplier": 1.0,
                },
                notes="Two-way oil x private-consumption sensitivity run.",
                cash_floor=1200.0,
                order=0,
            )
            driver_path = build_driver_path(macro_base, calibration, scenario, macro_reference)
            run = run_case(driver_path, calibration, history, scenario)
            forecast = run["forecast"]
            status = run["status"].iloc[0]
            records.append(
                {
                    "private_consumption_pp_shock": float(pc_shock),
                    "oil_pct_shock": float(oil_shock),
                    "min_cash": float(forecast["cash_end"].min()),
                    "ending_cash": float(forecast["cash_end"].iloc[-1]),
                    "max_short_term_debt": float(forecast["short_term_debt"].max()),
                    "incremental_revolver_draw": float(max(forecast["short_term_debt"].max() - opening_short_term_debt, 0.0)),
                    "max_net_debt_to_ebitda": float(forecast["NetDebt_to_EBITDA"].max()),
                    "status": status["status"],
                }
            )

    sensitivity_results = pd.DataFrame(records).sort_values(["private_consumption_pp_shock", "oil_pct_shock"])
    min_cash_pivot = sensitivity_results.pivot(index="private_consumption_pp_shock", columns="oil_pct_shock", values="min_cash")
    incremental_revolver_pivot = sensitivity_results.pivot(index="private_consumption_pp_shock", columns="oil_pct_shock", values="incremental_revolver_draw")
    ending_cash_pivot = sensitivity_results.pivot(index="private_consumption_pp_shock", columns="oil_pct_shock", values="ending_cash")
    return {
        "sensitivity_results": sensitivity_results,
        "min_cash_pivot": min_cash_pivot,
        "incremental_revolver_pivot": incremental_revolver_pivot,
        "ending_cash_pivot": ending_cash_pivot,
    }
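Each pivot above reshapes the flat shock grid into a two-way table; a toy grid (invented values) shows the orientation, private-consumption shocks on rows and oil shocks on columns:

```python
import pandas as pd

# Invented 2x2 shock grid in the flat layout run_sensitivity_grid produces.
toy_grid = pd.DataFrame({
    "private_consumption_pp_shock": [-0.50, -0.50, 0.00, 0.00],
    "oil_pct_shock": [0.00, 0.10, 0.00, 0.10],
    "min_cash": [1_300.0, 1_250.0, 1_450.0, 1_400.0],
})
toy_min_cash_pivot = toy_grid.pivot(
    index="private_consumption_pp_shock", columns="oil_pct_shock", values="min_cash"
)
```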


def solve_max_oil_shock_without_incremental_revolver(history, calibration, macro_base, macro_reference, low=0.0, high=2.0, tolerance=1e-4, max_iter=40):
    """Bisect for the largest oil shock that avoids any revolver draw above opening short-term debt."""
    opening_short_term_debt = calibration["opening_short_term_debt"]

    def evaluate(oil_shock):
        scenario = ScenarioConfig(
            name="threshold_eval",
            overrides={
                "oil_pct_shock": float(oil_shock),
                "private_consumption_pp_shock": 0.0,
                "capex_multiplier": 1.0,
                "dividend_multiplier": 1.0,
                "buyback_multiplier": 1.0,
            },
            notes="Threshold evaluation run.",
            cash_floor=1200.0,
            order=0,
        )
        driver_path = build_driver_path(macro_base, calibration, scenario, macro_reference)
        run = run_case(driver_path, calibration, history, scenario)
        forecast = run["forecast"]
        max_short_term_debt = float(forecast["short_term_debt"].max())
        return {
            "oil_shock": oil_shock,
            "max_short_term_debt": max_short_term_debt,
            "incremental_draw": max_short_term_debt - opening_short_term_debt,
            "min_cash": float(forecast["cash_end"].min()),
        }

    low_eval = evaluate(low)
    high_eval = evaluate(high)
    status = "solved"
    threshold_oil_shock = np.nan
    threshold_eval = high_eval

    if low_eval["incremental_draw"] > 1e-6:
        status = "binding_at_low_bound"
        threshold_oil_shock = low
        threshold_eval = low_eval
    elif high_eval["incremental_draw"] <= 1e-6:
        status = "not_binding_within_search_range"
        threshold_oil_shock = np.nan
        threshold_eval = high_eval
    else:
        lo = low
        hi = high
        lo_eval = low_eval
        for _ in range(max_iter):
            mid = (lo + hi) / 2.0
            mid_eval = evaluate(mid)
            if mid_eval["incremental_draw"] <= 1e-6:
                lo = mid
                lo_eval = mid_eval
            else:
                hi = mid
            if hi - lo <= tolerance:
                break
        threshold_oil_shock = lo
        threshold_eval = lo_eval

    front_quarter_base_oil = float(macro_base["oil_price_base"].iloc[0])
    front_quarter_threshold_oil = np.nan if np.isnan(threshold_oil_shock) else front_quarter_base_oil * (1.0 + threshold_oil_shock)

    return pd.DataFrame(
        [
            {
                "constraint": "maximum oil shock without incremental revolver draw above opening short-term debt",
                "solver_status": status,
                "opening_short_term_debt": opening_short_term_debt,
                "oil_pct_shock_threshold": threshold_oil_shock,
                "front_quarter_base_oil_price": front_quarter_base_oil,
                "front_quarter_threshold_oil_price": front_quarter_threshold_oil,
                "max_short_term_debt_at_threshold": threshold_eval["max_short_term_debt"],
                "incremental_draw_at_threshold": threshold_eval["incremental_draw"],
                "min_cash_at_threshold": threshold_eval["min_cash"],
            }
        ]
    )

4. Run the Named Scenarios and Review Diagnostics¶

Each named scenario is now just a documented set of assumptions; the forecast engine and the diagnostics are shared across every case.
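A minimal sketch of what that configuration layer looks like, using a simplified stand-in for the notebook's `ScenarioConfig` dataclass (field names mirror the ones used in the threshold solver; the registry contents here are illustrative, not the notebook's actual scenario definitions):

```python
from dataclasses import dataclass

# Simplified stand-in for the notebook's ScenarioConfig.
@dataclass
class ScenarioConfig:
    name: str
    overrides: dict
    notes: str = ""
    cash_floor: float = 1200.0
    order: int = 0

# A registry keyed by scenario name keeps every case in one configuration layer;
# the shared engine iterates over it instead of branching per scenario.
scenario_registry = {
    "base": ScenarioConfig(
        name="base",
        overrides={"oil_pct_shock": 0.0, "private_consumption_pp_shock": 0.0},
        notes="Consensus path.",
        order=0,
    ),
    "downside": ScenarioConfig(
        name="downside",
        overrides={"oil_pct_shock": 0.10, "private_consumption_pp_shock": -0.50},
        notes="Higher oil, softer demand.",
        cash_floor=1250.0,
        order=2,
    ),
}

for cfg in sorted(scenario_registry.values(), key=lambda c: c.order):
    print(cfg.name, cfg.overrides)
```

Keeping the scenarios as plain data like this is what lets the same `run_case` path execute all of them without scenario-specific code.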

named_results = run_named_scenarios(
    history=history,
    calibration=calibration,
    macro_base=macro_base,
    scenario_registry=scenario_registry,
    macro_reference=macro_reference,
)
comparison_tables = package_comparison_tables(named_results)

display(named_results["scenario_status"])
display(comparison_tables["latest_pivot"])
display(
    named_results["scenario_diagnostics"][[
        "scenario",
        "Period",
        "hard_fail_count",
        "soft_warning_count",
        "soft_warning_reasons",
        "period_status",
    ]].head(12)
)
scenario scenario_order status notes cash_floor latest_period hard_fail_count soft_warning_count min_cash ending_cash min_cash_headroom ending_short_term_debt max_short_term_debt max_incremental_revolver_draw max_net_debt_to_ebitda max_balance_error_abs
0 base 0 WARN Consensus oil path and consensus private-consu... 1,200.00 FY2028-Q2 0 4 1,229.56 1,229.56 29.56 0.00 0.00 0.00 3.47 60.21
1 upside 1 WARN Lower oil and firmer demand with modestly stro... 1,200.00 FY2028-Q2 0 4 1,200.00 1,200.00 0.00 80.68 80.68 0.00 3.40 62.73
2 downside 2 WARN Higher oil and softer demand with partial buyb... 1,250.00 FY2028-Q2 0 1 1,488.96 1,786.17 238.96 0.00 0.00 0.00 3.04 55.95
3 liquidity_stress 3 WARN Oil spike and sharp demand slowdown with a hig... 1,400.00 FY2028-Q2 0 6 1,442.74 1,687.76 42.74 0.00 0.00 0.00 3.59 51.85
latest_period comparison_latest_revenue comparison_latest_ebit ending_cash max_short_term_debt max_net_debt_to_ebitda status
scenario
base FY2028-Q2 2,481.62 294.05 1,229.56 0.00 3.47 WARN
upside FY2028-Q2 2,585.44 321.61 1,200.00 80.68 3.40 WARN
downside FY2028-Q2 2,306.14 249.05 1,786.17 0.00 3.04 WARN
liquidity_stress FY2028-Q2 2,137.08 208.35 1,687.76 0.00 3.59 WARN
scenario Period hard_fail_count soft_warning_count soft_warning_reasons period_status
0 base FY2026-Q4 0 0 PASS
1 base FY2027-Q1 0 0 PASS
2 base FY2027-Q2 0 2 net debt to EBITDA above 3.0x; cash headroom b... WARN
3 base FY2027-Q3 0 0 PASS
4 base FY2027-Q4 0 0 PASS
5 base FY2028-Q1 0 0 PASS
6 base FY2028-Q2 0 2 net debt to EBITDA above 3.0x; cash headroom b... WARN
7 upside FY2026-Q4 0 0 PASS
8 upside FY2027-Q1 0 0 PASS
9 upside FY2027-Q2 0 2 net debt to EBITDA above 3.0x; cash headroom b... WARN
10 upside FY2027-Q3 0 0 PASS
11 upside FY2027-Q4 0 0 PASS

5. Build the Main Two-Way Sensitivity Table¶

Following Chapter 6, this table varies two exogenous drivers across a grid and reads the result as a response surface.

Chosen axes for this project:

  • oil_price
  • private_consumption_qoq

Most useful outputs for this RL case:

  • minimum cash
  • incremental revolver draw
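The grid-to-pivot mechanics can be sketched with a toy stand-in for the model (the real response is nonlinear and comes from the full forecast run; the linear function and coefficients below are purely illustrative):

```python
import pandas as pd

# Toy stand-in for the model: a linear response of minimum cash to the two
# shocks. Only the grid -> long records -> pivot pattern matters here.
def toy_min_cash(pc_shock, oil_shock):
    return 1230.0 + 40.0 * pc_shock - 300.0 * oil_shock

# Evaluate every combination on the grid and store long-format records.
records = [
    {
        "private_consumption_pp_shock": pc,
        "oil_pct_shock": oil,
        "min_cash": toy_min_cash(pc, oil),
    }
    for pc in (-0.50, 0.0, 0.50)
    for oil in (-0.05, 0.0, 0.05)
]
grid = pd.DataFrame(records)

# Pivot the long records into the two-way response surface.
min_cash_pivot = grid.pivot(
    index="private_consumption_pp_shock",
    columns="oil_pct_shock",
    values="min_cash",
)
print(min_cash_pivot)
```

The long-format `records` list is the canonical store; the pivots are just views of it, which is why the notebook exports both.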

sensitivity = run_sensitivity_grid(
    history=history,
    calibration=calibration,
    macro_base=macro_base,
    macro_reference=macro_reference,
)

display(sensitivity["min_cash_pivot"])
display(sensitivity["incremental_revolver_pivot"])
display(sensitivity["sensitivity_results"].head(15))
oil_pct_shock -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15
private_consumption_pp_shock
-1.00 1,240.44 1,217.70 1,200.00 1,200.00 1,200.00 1,200.00 1,200.00
-0.50 1,267.83 1,246.54 1,222.49 1,200.00 1,200.00 1,200.00 1,200.00
-0.25 1,274.99 1,260.42 1,238.04 1,213.54 1,200.00 1,200.00 1,200.00
0.00 1,282.66 1,267.77 1,253.03 1,229.56 1,204.61 1,200.00 1,200.00
0.25 1,290.85 1,275.63 1,260.57 1,245.65 1,221.10 1,200.00 1,200.00
0.50 1,299.54 1,284.00 1,268.62 1,253.38 1,238.30 1,212.67 1,200.00
1.00 1,318.44 1,302.27 1,286.24 1,270.37 1,254.65 1,239.07 1,223.11
oil_pct_shock -0.15 -0.10 -0.05 0.00 0.05 0.10 0.15
private_consumption_pp_shock
-1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-0.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-0.25 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.25 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
private_consumption_pp_shock oil_pct_shock min_cash ending_cash max_short_term_debt incremental_revolver_draw max_net_debt_to_ebitda status
0 -1.00 -0.15 1,240.44 1,240.44 0.00 0.00 3.74 WARN
1 -1.00 -0.10 1,217.70 1,217.70 0.00 0.00 3.86 WARN
2 -1.00 -0.05 1,200.00 1,200.00 4.62 0.00 3.98 WARN
3 -1.00 0.00 1,200.00 1,200.00 26.52 0.00 4.11 WARN
4 -1.00 0.05 1,200.00 1,200.00 48.01 0.00 4.23 WARN
5 -1.00 0.10 1,200.00 1,200.00 69.12 0.00 4.40 WARN
6 -1.00 0.15 1,200.00 1,200.00 89.81 0.00 4.58 WARN
7 -0.50 -0.15 1,267.83 1,271.02 0.00 0.00 3.44 WARN
8 -0.50 -0.10 1,246.54 1,246.54 0.00 0.00 3.55 WARN
9 -0.50 -0.05 1,222.49 1,222.49 0.00 0.00 3.66 WARN
10 -0.50 0.00 1,200.00 1,200.00 1.13 0.00 3.78 WARN
11 -0.50 0.05 1,200.00 1,200.00 24.33 0.00 3.89 WARN
12 -0.50 0.10 1,200.00 1,200.00 47.11 0.00 4.01 WARN
13 -0.50 0.15 1,200.00 1,200.00 69.49 0.00 4.14 WARN
14 -0.25 -0.15 1,274.99 1,288.34 0.00 0.00 3.30 WARN

6. Solve the Threshold Result¶

As in Chapter 6, the threshold is framed as a conditional break-even result, not an unconditional truth.

Question used here:

What is the maximum oil shock the model can absorb without requiring incremental revolver draw above the opening short-term debt balance?
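The solver answers this by bisection: the incremental draw is monotone in the oil shock over the search range, so the break-even can be bracketed and halved. A self-contained sketch of the same loop, with a toy constraint function standing in for the full forecast run (the 0.61 break-even below is hypothetical, chosen to echo the result reported further down):

```python
# Toy constraint: incremental revolver draw as a function of the oil shock.
# Hypothetical shape: draws start once the shock exceeds ~0.61.
def incremental_draw(oil_shock):
    return max(0.0, 400.0 * (oil_shock - 0.61))

lo, hi = 0.0, 2.0
for _ in range(40):
    mid = (lo + hi) / 2.0
    if incremental_draw(mid) <= 1e-6:
        lo = mid  # still feasible: move the lower bound up
    else:
        hi = mid  # constraint binds: move the upper bound down
    if hi - lo <= 1e-4:
        break

print(round(lo, 3))  # lo converges to the 0.61 break-even from below
```

The real solver also handles the two boundary cases first (constraint already binding at `low`, or never binding at `high`) before entering the bisection loop, which is why it reports a `solver_status` alongside the threshold.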

threshold_result = solve_max_oil_shock_without_incremental_revolver(
    history=history,
    calibration=calibration,
    macro_base=macro_base,
    macro_reference=macro_reference,
)

display(threshold_result)
constraint solver_status opening_short_term_debt oil_pct_shock_threshold front_quarter_base_oil_price front_quarter_threshold_oil_price max_short_term_debt_at_threshold incremental_draw_at_threshold min_cash_at_threshold
0 maximum oil shock without incremental revolver... solved 246.70 0.61 100.94 162.69 246.69 -0.01 1,200.00

7. Export the Chapter 6 Output Pack¶

The export pack keeps the long-format run records, diagnostics, comparison tables, sensitivity tables, and the threshold result together in the outputs folder.

def prepare_export(df):
    out = df.copy()
    for col in out.columns:
        # Period columns cannot round-trip through CSV; serialize them as strings.
        # isinstance(dtype, pd.PeriodDtype) replaces the deprecated
        # pd.api.types.is_period_dtype helper.
        if isinstance(out[col].dtype, pd.PeriodDtype):
            out[col] = out[col].astype(str)
    numeric_cols = out.select_dtypes(include=[np.number]).columns
    out[numeric_cols] = out[numeric_cols].round(4)
    return out


export_tables = {
    "chapter6_ch5_anchor": baseline_anchor,
    "chapter6_macro_base_assumptions": macro_base,
    "chapter6_quarter_profiles": quarter_profile_display,
    "chapter6_macro_translation_table": translation_table,
    "chapter6_scenario_definitions": scenario_registry_df,
    "chapter6_scenario_assumptions": named_results["scenario_driver_paths"],
    "chapter6_scenario_forecast": named_results["scenario_forecast"],
    "chapter6_scenario_diagnostics": named_results["scenario_diagnostics"],
    "chapter6_scenario_status": named_results["scenario_status"],
    "chapter6_comparison_long": comparison_tables["comparison_long"],
    "chapter6_comparison_summary": comparison_tables["comparison_summary"],
    "chapter6_sensitivity_grid": sensitivity["sensitivity_results"],
    "chapter6_sensitivity_min_cash_pivot": sensitivity["min_cash_pivot"].reset_index(),
    "chapter6_sensitivity_incremental_revolver_pivot": sensitivity["incremental_revolver_pivot"].reset_index(),
    "chapter6_sensitivity_ending_cash_pivot": sensitivity["ending_cash_pivot"].reset_index(),
    "chapter6_oil_threshold_result": threshold_result,
}

written_files = []
for name, table in export_tables.items():
    path = OUT_DIR / f"{name}.csv"
    prepare_export(table).to_csv(path, index=False)
    written_files.append({"table": name, "path": str(path)})

written_files = pd.DataFrame(written_files)
display(written_files)
table path
0 chapter6_ch5_anchor /Users/theomachado/Library/CloudStorage/OneDri...
1 chapter6_macro_base_assumptions /Users/theomachado/Library/CloudStorage/OneDri...
2 chapter6_quarter_profiles /Users/theomachado/Library/CloudStorage/OneDri...
3 chapter6_macro_translation_table /Users/theomachado/Library/CloudStorage/OneDri...
4 chapter6_scenario_definitions /Users/theomachado/Library/CloudStorage/OneDri...
5 chapter6_scenario_assumptions /Users/theomachado/Library/CloudStorage/OneDri...
6 chapter6_scenario_forecast /Users/theomachado/Library/CloudStorage/OneDri...
7 chapter6_scenario_diagnostics /Users/theomachado/Library/CloudStorage/OneDri...
8 chapter6_scenario_status /Users/theomachado/Library/CloudStorage/OneDri...
9 chapter6_comparison_long /Users/theomachado/Library/CloudStorage/OneDri...
10 chapter6_comparison_summary /Users/theomachado/Library/CloudStorage/OneDri...
11 chapter6_sensitivity_grid /Users/theomachado/Library/CloudStorage/OneDri...
12 chapter6_sensitivity_min_cash_pivot /Users/theomachado/Library/CloudStorage/OneDri...
13 chapter6_sensitivity_incremental_revolver_pivot /Users/theomachado/Library/CloudStorage/OneDri...
14 chapter6_sensitivity_ending_cash_pivot /Users/theomachado/Library/CloudStorage/OneDri...
15 chapter6_oil_threshold_result /Users/theomachado/Library/CloudStorage/OneDri...

The Chapter 6 notebook is now cleaner and more automated.

Key guardrails preserved from the lecture PDF:

  • the scenarios still live in a configuration layer,
  • all cases still use one shared execution path,
  • diagnostics stay attached to the results,
  • the two-way table is still built from explicit exogenous drivers,
  • the threshold result is still reported as a conditional model output.

What changed in this version:

  • the manual private-consumption table was removed from the main path,
  • the local oil spreadsheets were removed from the main path,
  • the macro base now comes from the API payload,
  • the current baseline rule is a flat carry-forward from the latest API actuals.
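The flat carry-forward rule can be stated in a few lines: take the latest API actual and repeat it across every forecast quarter. A hedged sketch with illustrative quarters and values (the real rule runs over the full set of macro drivers, not a single series):

```python
import pandas as pd

# Illustrative API actuals for one driver; values and quarters are made up.
actuals = pd.Series(
    [96.20, 100.94],
    index=pd.PeriodIndex(["2025Q3", "2025Q4"], freq="Q"),
    name="oil_price_base",
)

# Flat carry-forward: the latest actual becomes every forecast quarter's value.
forecast_quarters = pd.period_range("2026Q1", periods=4, freq="Q")
carry_forward = pd.Series(actuals.iloc[-1], index=forecast_quarters, name=actuals.name)

# The base path is actuals followed by the carried-forward quarters.
macro_base_path = pd.concat([actuals, carry_forward])
print(macro_base_path)
```

A flat carry-forward is deliberately conservative: it encodes no view on the driver's trend, which keeps the baseline neutral and pushes every directional view into the named scenarios.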