Tailored for Enterprises
Transform your business processes with AI through support for multiple language models, a secure and scalable architecture, and a cost-optimized infrastructure.
ATP AiX is a next-generation platform we have developed to enable organizations to benefit from generative AI technologies in a flexible, secure, and cost-effective way. ATP AiX supports large language models (LLMs) such as ChatGPT, DeepSeek, and Gemini, as well as organization-specific, custom-trained models. Through a single interface, it provides capabilities such as data querying, analysis, and building AI assistants, delivering a user-friendly experience.
ATP AiX can also run locally, without internet access, in scenarios where this is required. This ensures that an organization’s confidential information and documents do not leave the organization. ATP AiX delivers the opportunities enabled by AI without compromising security.
ATP AiX is also offered together with a hardware infrastructure for organizations that do not want to invest in a dedicated data center for AI, or that cannot use cloud services due to regulatory constraints: the AiX Mini PC, a liquid-cooled mini workstation built around AMD’s Ryzen AI Max processor that brings AI-accelerated computing power to the desktop for professional users and creative teams alike.
Moreover, ATP AiX offers all these advantages at the lowest possible cost. Instead of per-user licensing, ATP AiX is built on a centralized token-pool management infrastructure, allowing organizations to allocate resources according to actual usage and reduce licensing costs; the idea is sketched below.
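To make the licensing model concrete, here is a minimal, illustrative sketch of a shared token pool drawn down by teams rather than licensed per user; the class, team names, and numbers are assumptions for illustration only, not ATP AiX’s actual implementation:

```python
class TokenPool:
    """Illustrative shared token pool: one organization-wide budget, many teams,
    no per-user licenses."""

    def __init__(self, total_tokens: int):
        self.remaining = total_tokens

    def consume(self, team: str, tokens: int) -> bool:
        # Grant the request only if the organization-wide budget still covers it.
        if tokens > self.remaining:
            return False
        self.remaining -= tokens
        return True


# Example: a 10M-token monthly pool shared across departments (illustrative numbers).
pool = TokenPool(10_000_000)
pool.consume("finance", 120_000)
pool.consume("legal", 45_000)
print(pool.remaining)  # 9,835,000 tokens left for the rest of the organization
```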
In summary, ATP AiX is an ideal solution for organizations looking to integrate AI into their business processes and accelerate their AI-driven transformation goals.
Key Features
Full support for the latest LLMs.
Enterprise-grade data security.
AiX Mini PC performance.
Easy AI assistant creation.
Token-based cost savings.
High performance, quiet operation, and a compact design—all in one!
The AiX Mini PC, powered by AMD’s Ryzen AI Max processor, brings AI-accelerated computing power to the desktop level. Equipped with liquid-cooling technology, this compact workstation delivers flawless performance for both professional users and creative teams. With its quiet, efficient, and long-lasting design, it prepares your organization today for the AI workloads of the future.
- AMD Ryzen AI Max Processor
- 128 GB LPDDR5x Memory
- Liquid Cooling System
- 400W Power Supply Unit (PSU)
- 2.5 & 10 GbE LAN Connectivity
ATP AiX’s Advanced AI Capabilities
The AiX Mini PC supports nearly 500 open-source and closed-source AI models. Below are some of the standout models that run on the AiX Mini PC, selected for their high benchmark scores:
OPENAI – GPT-OSS-20B (21B): Offers powerful text understanding and generation capabilities for large-scale enterprise scenarios.
ELEUTHER AI – GPT-NeoX-20B (20B): Provides balanced performance for research, data analysis, and natural language processing projects.
LMSYS – Vicuna-13B (LLaMA-based, 13B): An LLM optimized for chat, assistant, and customer-interaction scenarios.
MISTRAL – Mistral-7B (7B): Stands out in applications requiring fast responses and low latency due to its lightweight structure.
MISTRAL – Mixtral-8x7B (47B MoE): Delivers high accuracy and efficiency across multiple workloads using the Mixture-of-Experts (MoE) architecture.
MISTRAL – Mixtral-8x22B (141B MoE): Provides a scalable infrastructure for complex enterprise scenarios with performance that rivals even larger models.
ALIBABA – Qwen-14B (14B): Offers a strong alternative for multilingual use, text generation, and corporate applications.
META – Llama-4 Scout (MoE): Designed to produce high-quality output across extensive datasets using the next-generation LLaMA architecture.
TII – Falcon-40B (40B): Meets the need for in-depth text analysis and content generation in large-scale projects.
TII – Falcon-7B (7B): Provides a balanced solution between performance and resource usage with its more compact structure.
Note: The AiX Mini PC can support hundreds of models beyond these; this list highlights featured models with high benchmark scores.
Fast Integration: Seamless compatibility with the OpenAI Assistants API (a sketch follows this list).
Task-Based Expert Bots: Design and deploy specialized bots focused on specific departments or roles.
End-to-End Smart Workflows: Advanced capabilities for code execution and file reading to automate complex processes.
Script-Based Automation: Accelerate repetitive tasks and reduce error rates through automated scripting.
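For teams integrating through the Assistants API compatibility mentioned above, a minimal sketch might look like the following; the endpoint URL, API key, and model name are placeholders, not ATP AiX’s actual configuration:

```python
from openai import OpenAI

# Placeholder values: point the client at an OpenAI-compatible endpoint
# (for example, a locally hosted gateway) instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="YOUR_API_KEY")

# Register a task-based expert bot scoped to a single department.
assistant = client.beta.assistants.create(
    name="Finance Expert Bot",
    instructions="Answer finance-department questions using internal policies only.",
    model="gpt-4o-mini",  # placeholder model name
    tools=[{"type": "code_interpreter"}, {"type": "file_search"}],
)

print(assistant.id)  # store this ID to route department traffic to the bot
```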
RAG-Based Knowledge Assistant: Fed by corporate documents for precise internal data retrieval (a simplified retrieval flow is sketched below).
Source-Grounded Responses: Finds relevant content first, then generates answers based on specific documents.
Minimized Hallucination: Paragraph-level source visibility ensures high accuracy and transparency.
Multi-Format Vectorization: Supports PDF, Word, Excel, and image files.
Interactive Chat: Drag-and-drop file uploads for instant summarization, chunking, and Q&A.
Integrated Preview: Interactive document reading experience within the interface.
Third-Party Integration: Seamless connection with SharePoint and other external sources.
Authorized Access: Controlled document sharing and secure usage via role-based permissions.
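The retrieval flow described above can be sketched in a few lines; the vector search shown here is deliberately simplified, and the endpoint, model names, and sample documents are illustrative assumptions rather than ATP AiX’s internal pipeline:

```python
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="YOUR_API_KEY")  # placeholder endpoint

# 1. Vectorize document chunks (text extracted upstream from PDF, Word, Excel, or images).
chunks = [
    "Travel expenses are reimbursed within 10 business days.",
    "Remote work requires written manager approval.",
]
emb = client.embeddings.create(model="text-embedding-3-small", input=chunks)
vectors = np.array([d.embedding for d in emb.data])

# 2. Retrieve the chunk most similar to the user's question (cosine similarity).
question = "How fast are travel expenses paid back?"
q_vec = np.array(
    client.embeddings.create(model="text-embedding-3-small", input=[question]).data[0].embedding
)
scores = vectors @ q_vec / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q_vec))
source = chunks[int(scores.argmax())]

# 3. Generate a source-grounded answer, citing the retrieved paragraph.
answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer only from the provided source paragraph and cite it."},
        {"role": "user", "content": f"Source: {source}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```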
Natural Language to SQL: Automatically generate SQL queries from plain language (e.g., “What was our revenue last month?”); see the sketch below.
Automatic Query Optimization: Generated SQL is automatically optimized for better performance.
Flexible Data Export: Export results instantly as tables, JSON, CSV, or charts.
Database Connectivity: Native PostgreSQL integration; MySQL, MSSQL, and Snowflake are on the upcoming roadmap.
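A simplified sketch of the natural-language-to-SQL flow, assuming a PostgreSQL database and an OpenAI-compatible endpoint; the schema, connection string, and model name are illustrative placeholders:

```python
import psycopg2
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="YOUR_API_KEY")  # placeholder endpoint

schema = "TABLE sales(id int, amount numeric, sold_at date)"  # illustrative schema
question = "What was our revenue last month?"

# Ask the model to translate the question into a single PostgreSQL query.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": f"Translate the question into one PostgreSQL SELECT statement. "
                    f"Schema: {schema}. Return SQL only, no explanations."},
        {"role": "user", "content": question},
    ],
)
sql = resp.choices[0].message.content.strip().strip("`")

# Execute against PostgreSQL and export the result (here simply as rows).
with psycopg2.connect("dbname=erp user=readonly") as conn:  # placeholder connection string
    with conn.cursor() as cur:
        cur.execute(sql)
        print(cur.fetchall())
```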
Rapid Deployment: Fast implementation with ATP’s ready-to-use infrastructure.
Custom Configuration: Institution-specific models, API keys, token limits, and capability settings.
Centralized Admin Panel:
- User permissions and authorization
- Token limits and usage reports
- Model-based statistics and activity tracking
Multi-Model Support: Integration with OpenAI (GPT), Claude, Gemini, DeepSeek, Mistral, LLaMA-3, and more.
Proprietary Model Support: Capability to run the institution’s own custom or fine-tuned models.
API & Token Management: Define unique API keys for each model and manage token budgets via the interface (illustrated in the sketch at the end of this section).
Dynamic Context Management: Enhanced response consistency and token efficiency.
Enterprise Identity & Access: Integration with Azure AD, Okta, Google Workspace, LDAP, and SSO.
Robust Security: High-level protection with AES-256, TLS 1.3, 3-key control, and granular authorization.
Usage Analytics: Query history, token usage, and resource consumption analytics at the user or team level.
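As a closing illustration of the API, token, and usage-tracking features listed above, the sketch below shows one possible shape for a per-model registry with token budgets and simple usage accounting; the field names and limits are assumptions, not ATP AiX’s configuration schema:

```python
from collections import defaultdict

# Illustrative per-model registry: each entry pairs an API key reference with a
# monthly token budget drawn from the central pool (values are placeholders).
MODEL_REGISTRY = {
    "gpt-4o":        {"api_key_env": "OPENAI_API_KEY",    "monthly_token_limit": 2_000_000},
    "claude-3-opus": {"api_key_env": "ANTHROPIC_API_KEY", "monthly_token_limit": 1_000_000},
    "llama-3-70b":   {"api_key_env": None,                "monthly_token_limit": 5_000_000},  # self-hosted
}

usage = defaultdict(int)  # tokens consumed per model this month

def record_usage(model: str, tokens: int) -> bool:
    """Record consumption and report whether the model stays within its budget."""
    usage[model] += tokens
    return usage[model] <= MODEL_REGISTRY[model]["monthly_token_limit"]

# Example: per-model statistics of the kind surfaced in usage reports.
record_usage("gpt-4o", 350_000)
record_usage("llama-3-70b", 1_200_000)
print(dict(usage))  # {'gpt-4o': 350000, 'llama-3-70b': 1200000}
```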