Mexico's first national-scale LLM. Built in collaboration with the Mexican government. Validated by NVIDIA.
KAL is a production-grade large language model built in collaboration with the Mexican government and validated by NVIDIA. It is the first LLM of its scale developed in Mexico, for Mexico — operating under Mexican law, hosted on infrastructure inside Mexican borders, governed by a compliance framework Mexican institutions can read and defend.
What KAL is not: an import, a translation, a wrapper. What KAL is: a sovereign AI capability for Mexican institutions, Mexican data, and Mexican decisions.
Every regulated economy will eventually need an AI infrastructure layer it controls. The alternative is outsourcing the governance of its most sensitive data and its most critical decisions to foreign platforms operating under foreign law. For Mexico, that conversation is not theoretical. It is a question of whether the country's most important AI workflows run under CNBV, CNSF, and CONDUSEF, or under another continent's legal regime.
KAL exists because Saptiva AI, the Mexican government, and NVIDIA agreed that Mexico should not be a tenant in someone else's AI infrastructure. It should be a principal.
KAL is not a Saptiva AI product alone. It is the result of a three-party collaboration.
Saptiva AI is the platform layer: the orchestration, the compliance surface, the FrIdA policy engine that governs how KAL can be accessed and by whom. NVIDIA is the validated infrastructure. The Mexican government is the jurisdictional principal. Together, the three make KAL a real piece of sovereign infrastructure rather than a vendor-branded model demo.
KAL is available to Mexican institutions through Saptiva AI — orchestrated by FrIdA, deployed on NVIDIA-validated infrastructure inside Mexican borders, under a compliance policy written for and by Mexican regulators.
Mexican institutions access KAL through Saptiva AI, with FrIdA enforcing the same residency, compliance, and audit posture that governs every other production workload on the platform. KAL is a model. Saptiva AI is the layer that makes it deployable. NVIDIA is the infrastructure that makes it fast.
Official statement from Mexican government and NVIDIA partners — to be added following joint publication approval.
KAL is the first public evidence of a broader pattern: Latin America is not going to run its AI on someone else's infrastructure. The same three-party structure — sovereign government, validated compute, Saptiva AI platform — is the template for what comes next across the region.
The next countries are already in conversation. The next institutional use cases — regulated enterprise, higher education, public services — are already being scoped. KAL is where the template started. It is not where it ends.
Whether you are a government program, a regulated enterprise, or an institutional principal asking the same question Mexico asked: under whose jurisdiction does our AI run? A Forward Deployed Engineer will respond within 48 hours.
Request a conversation →