A family of AI models designed to drive business impact, combining efficiency, low power consumption, and adaptability across vertical sectors. Built on deep expertise in language technologies and AI, Velvet translates advanced research and real-world experience into targeted, high-impact solutions.
What is Velvet?
Velvet is a family of multilingual Large Language Models designed and developed entirely in Italy, with a strong focus on data governance and careful attention to the European regulatory framework. It is integrated into ready-to-use applications, is easily customizable to specific needs, and is available for deployment on major cloud providers or on-premises.
All models deliver state-of-the-art performance relative to their scale and operating costs. Alongside a strong commitment to open-source development, the enterprise-hosted models are designed to bring the best capabilities and performance into more controlled environments and specialized domains. Velvet open-weight models are released under the Apache 2.0 license.
Main Features
Trustworthy
Designed for use in corporate and public sector environments, Velvet aligns with national and European regulations. Special attention has been given to the curation of training data to minimize bias and harmful content. Additionally, various mechanisms have been implemented to address privacy concerns, ensuring secure and responsible deployment.
Lightweight
Velvet maximizes performance with a light infrastructure footprint. Effective yet low-consumption, it can also run on small infrastructures equipped with latest-generation GPUs, containing carbon footprint and energy consumption while reducing training and operating costs at scale. Velvet is ready for use in the cloud or on-premises.
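As an illustration of this lightweight profile, the sketch below loads an open-weight Velvet checkpoint with 4-bit quantization via the Hugging Face transformers and bitsandbytes libraries so it can answer prompts on a single modest GPU. The model ID and quantization settings are assumptions for illustration, not an official deployment recipe.

```python
# Minimal sketch (assumptions, not an official recipe): running an open-weight
# Velvet model with 4-bit quantization so it fits on a single, modest GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Almawave/Velvet-2B"  # assumed Hugging Face ID of an open-weight Velvet model

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights to cut memory use
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 on recent GPUs
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                      # place layers on the available GPU(s)
)

prompt = "Summarise in one sentence the advantages of a lightweight language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```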
Agile
Velvet is built with advanced text comprehension and sophisticated capabilities, making it well suited to performing specific tasks across various industries, including Healthcare, Public Administration, Security, Finance, and Mobility. Native integration with AIWave ensures seamless and accessible distribution.
Velvet Approach
To address the key challenges in developing a large language model, Velvet integrates mechanisms that ensure ethical behavior, fair data handling, and scalability.
Cultural Fit
Unlike models primarily trained on English-language content and translated text, Velvet’s training dataset has been carefully balanced across multiple languages. For instance, in Velvet 14B, 23% of the data consists of content originally written in Italian. This approach ensures that the generated outputs more accurately reflect the cultural differences and nuances inherent in the languages represented in the training data.
Model Adaptation
Thanks to an efficient architecture, models can be optimized and fine-tuned for targeted tasks or specific hardware requirements.
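As a hedged illustration of what such adaptation can look like, the sketch below attaches LoRA adapters to an open-weight Velvet checkpoint using the Hugging Face peft library, so only a small fraction of the parameters needs to be trained. The model ID and target module names are assumptions, not an official fine-tuning procedure.

```python
# Minimal sketch (assumptions, not an official recipe): adapting an open-weight
# Velvet model to a targeted task with LoRA adapters, training only a small
# fraction of its parameters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "Almawave/Velvet-14B"  # assumed Hugging Face ID of an open-weight Velvet model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed names of the attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model

# From here, `model` can be passed to a standard supervised fine-tuning loop
# (e.g. the transformers Trainer) on task-specific data.
```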
Bias Mitigation
Specific methods have been developed and integrated into the training process to minimize cultural, ethical, and gender biases as well as hateful content.
Data Curation
Training data is carefully selected and cleaned to minimize toxicity and inappropriate content. Velvet's data lineage is fully traceable, so the origin of every source is known.
Ethical Assessment
Ethical compliance is verified against OECD and WHO standards to ensure the model operates responsibly and adheres to international guidelines.
Opt-out Algorithm
PAE (Privacy Association Editing) is a proprietary algorithm that removes sensitive information directly from the model when necessary, without the need for retraining. In practice, if a user requests data removal, the opt-out algorithm guarantees the data is securely deleted without affecting the model’s performance.
Business Adoption
Risk Mitigation
Thanks to its balanced training architecture and model design, Velvet does not fall into the category of systemic-risk models as defined by the EU AI Act, relieving deploying organizations of significant management responsibilities.
Seamless Integration
Although Velvet is a general-purpose LLM family, its native integration with AIWave makes it immediately available within ready-to-use applications designed for various sectors and for managing complex tasks.
Open Source vs Hosted
The open-weight models allow the community to enhance their capabilities and adapt them to a variety of specific tasks, while the enterprise models are designed for more controlled environments.
Real-world Applications
Used in real-world applications by companies since the beta-testing phase, Velvet is designed to be intuitive and cost-effective and to deliver reliable results.
Governance and Internal Oversight
Ethical and Technical Committee
Supervision by an ethical and technical committee ensures that the model aligns with principles of transparency, fairness, and safety.
Bias Monitoring
Alignment with ethical guidelines is maintained through continuous bias monitoring, supported by auditing tools and iterative updates.
Commercial and Institutional Usage
Restrictions on commercial and institutional usage enhance compliance with regulatory frameworks and shared responsibility principles.
Periodic Review
Periodic reviews assess the model’s impact in high-risk applications.
Code of Practice
Almawave also voluntarily adheres to the European GPAI Code of Practice and is certified to ISO/IEC 42001 for the management of AI systems and models.
Partner Ecosystem
The Velvet models are also the result of numerous ongoing collaborations with the academic and research world, including Tor Vergata University, the Bruno Kessler Foundation, La Sapienza University, the University of Catania, and the University of Bari.
SIpEIA (Italian Society for Ethics in Artificial Intelligence) was also involved to support ethical and regulatory compliance.
Almawave is a member of the Italian consortium Innovate, led by Cineca, which will build the first industry-grade supercomputer funded by EuroHPC.