Initial situation: Uncoordinated AI initiatives despite high pressure to innovate
An IT service provider in the financial sector faced a typical challenge of digital transformation: everyone was talking about artificial intelligence, and various teams across the company were experimenting with AI technologies, but central coordination and uniform procedures were lacking.
The central pain points:
- Uncoordinated, sprawling innovation: Many teams developed AI solutions in parallel without knowing of each other's work or benefiting from shared experience
- Lack of standardization: No uniform methods for evaluating and developing AI use cases
- Waste of resources: Teams “ran in the wrong directions”, investing time in approaches that had already been solved elsewhere or were impractical
- Missing expertise: The absence of central AI know-how led to suboptimal technical decisions
- Complex data protection requirements: As a banking IT service provider, the company was subject to strict regulatory requirements, which required special precautions for cloud-based AI solutions
Business context and urgency:
The customer realized that AI is not a short-term hype but a strategic necessity. Without a coordinated approach, the company risked falling behind its competitors and wasting valuable resources. At the same time, every AI initiative had to meet the highest security and data protection standards. The use of external cloud AI services further exacerbated this challenge.
The central question was: How do we create a central point of contact for AI topics that promotes innovation, sets standards and at the same time meets regulatory requirements?
Our solution: A central AI unit as a catalyst for AI excellence
Together with the customer, AMAI established a central AI unit: an internal AI laboratory that serves as the single point of contact for all AI initiatives in the company. The unit coordinates decentralized initiatives, develops prototypes, and hands over validated solutions to the specialist areas for production implementation.
The systematic workflow:
- Use case identification: Teams report AI ideas to the central AI unit
- Use case workshop: Structured evaluation of business relevance and technical feasibility
- Business case workshop: Detailed evaluation of costs, benefits and resource requirements
- PoC development: Rapid prototype development with modern AI technologies
- Handover to departments: Transfer of the validated solution for production development
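The five-step workflow above can be sketched as a simple stage-gated pipeline. This is a minimal illustration, not the unit's actual tooling; the class and stage names are assumptions for clarity.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    IDENTIFIED = auto()            # idea reported to the central AI unit
    USE_CASE_WORKSHOP = auto()     # business relevance and feasibility check
    BUSINESS_CASE_WORKSHOP = auto()# costs, benefits, resource requirements
    POC = auto()                   # rapid prototype development
    HANDOVER = auto()              # transfer to the specialist area

PIPELINE = list(Stage)  # stages in the order a use case passes through them

@dataclass
class UseCase:
    title: str
    stage: Stage = Stage.IDENTIFIED
    notes: list = field(default_factory=list)

def advance(uc: UseCase, approved: bool, note: str = "") -> UseCase:
    """Move a use case to the next stage only if the current gate approves it."""
    if note:
        uc.notes.append(note)
    if approved and uc.stage is not Stage.HANDOVER:
        uc.stage = PIPELINE[PIPELINE.index(uc.stage) + 1]
    return uc
```

Each workshop acts as a gate: a use case that is rejected simply stays at its current stage rather than consuming further resources.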
In the course of the project, the central AI unit evolved from a pure prototyping workshop into an enablement partner that supports specialist teams with expertise and enables them to develop AI solutions independently.
Hybrid cloud architecture with privacy-first approach
The central AI unit uses a hybrid architecture that combines the benefits of modern cloud AI services with the strict data protection requirements of the financial sector. The technology stack comprises Azure OpenAI with GPT-4 and GPT-3.5 for advanced NLP tasks, local LLMs (Llama models) for particularly sensitive applications, LangGraph for developing intelligent agents, and Databricks as the central machine learning platform.
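The core idea of the hybrid setup can be shown as a small routing decision: requests carrying highly sensitive data go to the on-premise model, everything else may use the cloud model. A minimal sketch, assuming illustrative sensitivity levels and backend names (not the customer's actual configuration):

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

def select_backend(sensitivity: Sensitivity) -> str:
    """Route a request to a model backend based on data sensitivity.

    CONFIDENTIAL data stays on-premise with a local Llama model;
    everything else may use the (anonymized) cloud path.
    Backend identifiers here are illustrative assumptions.
    """
    if sensitivity is Sensitivity.CONFIDENTIAL:
        return "local-llama"       # data never leaves the company
    return "azure-openai-gpt4"     # cloud model, used after anonymization
```

In practice such a router sits in front of the anonymization step, so that even the cloud path only ever sees masked data.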
The biggest technical challenge was reconciling cloud AI services with the financial sector's data protection requirements. The team therefore developed a multi-stage anonymization service that processes sensitive data locally before it is transferred to cloud services. The service recognizes and masks personal data such as names, email addresses, and postal addresses; enables reversible anonymization for authorized users; and creates transparency through documented data protection processes. This allows powerful cloud models to be used while all regulatory requirements are met.
In addition, an AI advisory board was established, consisting of three AI experts and three works council members, which approves all data access for AI projects. This unique governance approach ensures responsible use of AI and creates trust within the company.
Agent architectures and intelligent automation
For complex tasks such as code migration from mainframe systems, the team developed intelligent agent architectures using LangGraph. These agents analyze old code, make architectural decisions, generate modern code, and document differences. The architecture combines cloud services for powerful, scalable models with on-premise deployment for full data control. Local LLMs serve as a fallback for highly sensitive data.
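The agent workflow described above can be modeled as a sequence of nodes that each transform a shared state: analyze the legacy code, decide on a target architecture, generate modern code, document the differences. The sketch below uses plain Python as a stand-in for LangGraph's graph wiring; all node names, state fields, and the target stack are illustrative assumptions, and the real agents would call LLMs at each step.

```python
from typing import Callable

State = dict  # shared state passed from node to node, as in a LangGraph graph

def analyze_legacy(state: State) -> State:
    # In the real system an LLM parses the mainframe code; here we just count lines.
    state["analysis"] = f"parsed {len(state['legacy_code'].splitlines())} lines"
    return state

def decide_architecture(state: State) -> State:
    state["target"] = "java-spring"  # illustrative target stack, not the actual one
    return state

def generate_code(state: State) -> State:
    state["modern_code"] = f"// migrated to {state['target']}\n"
    return state

def document_diff(state: State) -> State:
    state["report"] = f"{state['analysis']} -> {state['target']}"
    return state

# LangGraph would register these as graph nodes connected by edges;
# a linear chain is the simplest special case.
NODES: list[Callable[[State], State]] = [
    analyze_legacy, decide_architecture, generate_code, document_diff,
]

def run_migration(legacy_code: str) -> State:
    state: State = {"legacy_code": legacy_code}
    for node in NODES:
        state = node(state)
    return state
```

The value of the graph formulation is that conditional edges (e.g. "retry generation if the diff report flags problems") can be added without restructuring the individual nodes.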
Integration and deployment
All applications are containerized and deployed on a company-internal OpenShift platform. From the outset, solutions are developed across separate environments (development, test, production), which enables rapid iteration in the PoC phase, consistent deployments, and easy handover to specialist areas.
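A common way to support such staged environments from a single codebase is environment-keyed configuration. The snippet below is a hedged sketch; the environment variable name, model choices, and settings are assumptions, not the customer's actual configuration.

```python
import os

# Per-environment settings for dev/test/prod (illustrative values only).
CONFIGS = {
    "development": {"model": "gpt-3.5", "log_level": "DEBUG"},
    "test":        {"model": "gpt-4",   "log_level": "INFO"},
    "production":  {"model": "gpt-4",   "log_level": "WARNING"},
}

def load_config(env: str = "") -> dict:
    """Return the settings for the given environment.

    Falls back to the APP_ENV environment variable (an assumed name),
    then to the development profile.
    """
    env = env or os.environ.get("APP_ENV", "development")
    return CONFIGS[env]
```

Keeping the per-environment differences in one place is what makes a PoC's transition to test and production deployments a configuration change rather than a code change.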
The solution follows the principle of integration over standalone applications: AI features are embedded directly into existing business tools rather than shipped as separate applications. This increases acceptance and makes the solutions more sustainable.
Successful use cases:
- Internal ChatGPT tool: Company-wide, data-protection-compliant access to large language models
- Ticket clustering and categorization: Automatic classification of support tickets to improve efficiency
- Code migration solutions: AI-powered support to replace mainframe systems
- SharePoint chatbot: Intelligent search in corporate documents
Measurable success: From chaos to orchestration
The central AI unit fundamentally transformed the company's AI landscape:
Operational excellence
- Successful launch of an internal ChatGPT tool, very well received by employees and now the most-used AI tool in the company
- Several successful project handovers to departments that are now independently bringing AI solutions into production
- Reducing duplication of work through central coordination and knowledge sharing
- Accelerated time-to-PoC through standardized workflows and reusable components
Regulatory compliance and data protection
- 100% data protection compliance through local anonymization before cloud use
- Establishing an AI governance model with expert advice and works council participation
- Transparent data access processes create trust among employees and management
- Successful use of cloud AI services despite the strictest regulatory requirements
Technological innovation
- Migrating from traditional ML approaches to large language models for NLP tasks
- Innovative agent architectures for complex automation tasks (e.g. code migration)
- Hybrid cloud strategy as a best practice for regulated industries
- Container-native development as a standard for all AI projects
Strategic perspective
- From prototypes to products: Several use cases went through the entire cycle from PoC to handover to specialist areas
- Development of internal AI expertise: Teams learn from AMAI experts and develop their own expertise
- Strategic realignment: From pure prototype development to an enablement partner for the entire company
- Measurable ROI: Success is measured by user acceptance (number of users of the internal language model) and initiated departmental projects