Frequently asked questions

Straight answers for on-prem AI, deployment, costs, and governance.

How this differs from online AI

Why is there no cloud trial?

ModelLinq is built for on-prem deployment, which requires access, data, and governance setup on your side. Instead of a cloud trial, we offer live online demos or a proof-of-concept (PoC) engagement.

Why not emphasize model size?

Model size is not the only performance factor. What matters more is governance-driven orchestration and task success within your security and cost constraints.

Is on-prem AI weaker?

Not for its purpose. Our focus is compliance and fit for target industries, delivered through governed workflows, data protection, and controlled outcomes.

Can we use our own models?

Bring-your-own-model support is limited and depends on the model's size and the compute available on your servers.

Cost and deployment concerns

Do we have to buy hardware?

Not necessarily. You can use existing servers or procure new ones; we help assess specifications together with our integration partners.

How is cost calculated?

Costs are based on project delivery and maintenance contracts. Pricing is fixed and predictable, not usage-based.

Is it suitable for SMBs?

Yes, especially if you need high security or predictable costs. Smaller teams can start with a right-sized deployment.

How long does deployment take?

It depends on scope and environment. We often start with a small PoC and then expand gradually.

Security and governance

Will data leak out?

No. Data stays on-premises by design, and access control and masking further reduce leakage risk.
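As a rough illustration of what masking can look like in practice, the sketch below replaces detected identifiers with typed placeholders before text reaches a model. The patterns and placeholder tokens are hypothetical examples, not ModelLinq's actual rules.

```python
import re

# Hypothetical masking rules for illustration only; real deployments
# would use policy-defined patterns for their own data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3,4}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected identifiers with typed placeholders before inference."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Masking at the boundary means the model only ever sees placeholders, so even model outputs cannot echo the original identifiers.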

Is there auditing?

Yes. Every model change, inference request, and usage action is logged and fully auditable.
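To make "fully auditable" concrete, here is a minimal sketch of tamper-evident audit records, where each entry carries a hash of the previous one so any alteration breaks the chain. The field names and format are assumptions for illustration, not ModelLinq's actual log schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical hash-chained audit log; illustrates the concept only.
def append_audit_record(log: list, actor: str, action: str, detail: str) -> dict:
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,   # e.g. "model_update", "inference", "login"
        "detail": detail,
        "prev": prev_hash,  # chain link to the previous record
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body
```

Because each record commits to its predecessor, an auditor can verify the whole history by walking the chain once.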

Can we restrict model sources and languages?

Yes. You can whitelist model sources, versions, and allowed languages to match internal policies.
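The allowlist idea can be sketched as a simple policy check: a request is admitted only if its source, model family and version, and language all match the policy. The sets and function below are hypothetical, not ModelLinq's configuration format.

```python
# Hypothetical allowlist policy for illustration; real policies would be
# configuration-driven, not hard-coded.
ALLOWED_SOURCES = {"internal-registry"}
ALLOWED_MODELS = {("llama-3.1", "8b-instruct")}  # (family, version)
ALLOWED_LANGUAGES = {"en", "ja"}

def is_request_allowed(source: str, family: str, version: str, language: str) -> bool:
    """Admit a request only if every attribute matches the allowlist."""
    return (
        source in ALLOWED_SOURCES
        and (family, version) in ALLOWED_MODELS
        and language in ALLOWED_LANGUAGES
    )
```

Treating the policy as deny-by-default keeps unapproved model sources and languages out even as new ones appear.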

Can it run in air-gapped networks?

Yes. ModelLinq supports deployment in isolated environments with no external connectivity.

Still have questions?

Tell us about your environment and we will map the right on-prem plan for your team.