
Custom LLMs: Arabic-Aware, Domain-Tuned

Generic language models often struggle with domain-specific Arabic due to dialect complexity, script variations, and tokenization challenges. At WosoM, we design LLMs that are purpose-built for industries like law, health, finance, or education — in Modern Standard Arabic or dialectal forms.

1. Identifying Domain-Specific Needs

Whether it's legal opinion drafting or conversational health triage, we start by working with stakeholders to define tasks, tone, accuracy metrics, and language forms. This scoping shapes model objectives and dataset sourcing.

Use case: A Gulf-based legal firm required a model to generate contracts in Arabic with contextual clause suggestions. We trained the model on 120,000+ documents sourced from real court judgments and legal texts.
2. Pre-Training Corpus Construction

We assemble a multilingual corpus with a strong Arabic backbone, including public texts, internal records, scraped content, and verified translations. Each file is cleaned, segmented, and aligned by domain.

  • Governmental reports & policies
  • Medical encyclopedias in Arabic
  • Court transcripts & contract clauses
  • Academic journals and theses
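As an illustration, the cleaning and segmentation step described above can be sketched in a few lines of plain Python. The normalization rules here (stripping tatweel elongation, optionally removing diacritics, collapsing extraction whitespace) are simplified examples, not the full pipeline:

```python
import re
import unicodedata

TATWEEL = "\u0640"
DIACRITICS = re.compile(r"[\u064B-\u0652]")  # Arabic harakat range

def clean_arabic(text: str, keep_diacritics: bool = True) -> str:
    """Normalize one Arabic snippet before it enters the corpus."""
    text = unicodedata.normalize("NFC", text)
    text = text.replace(TATWEEL, "")  # strip elongation marks
    if not keep_diacritics:
        text = DIACRITICS.sub("", text)
    # collapse whitespace left over from PDF/HTML extraction
    return re.sub(r"\s+", " ", text).strip()

def segment_by_domain(records):
    """Group cleaned (domain, text) pairs; drop exact duplicates per domain."""
    corpus = {}
    for domain, text in records:
        cleaned = clean_arabic(text)
        bucket = corpus.setdefault(domain, [])
        if cleaned and cleaned not in bucket:
            bucket.append(cleaned)
    return corpus
```

Deduplicating per domain at ingestion time keeps near-identical boilerplate (e.g. repeated contract clauses) from dominating the pre-training mix.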
3. Tokenization & Language Embeddings

Arabic tokenization requires morphological awareness. We apply custom BPE or SentencePiece tokenizers that preserve diacritics and root forms, ensuring efficient vocabulary coverage.

python train_tokenizer.py --input corpus.txt --vocab_size 32000 --lang ar
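The merge step behind BPE can be illustrated with a toy trainer in plain Python (a teaching sketch, not the production tokenizer): frequent character pairs, such as the root-initial ك+ت shared by كتب/كتاب/مكتب, merge into a single subword early, which is why morphology-aware vocabularies cover Arabic efficiently.

```python
from collections import Counter

def train_bpe(words, num_merges):
    """Learn BPE merges over a tiny corpus of whitespace-free words.
    Each word starts as a tuple of characters; the most frequent
    adjacent pair is merged into one symbol per iteration."""
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges
```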
4. Base Model Architecture

Our models are built on decoder-only transformers optimized for fast inference and fine-tuning. We adopt open-source base models (such as GPT-2 or MPT) and adapt them to Arabic efficiently with parameter-efficient fine-tuning (PEFT) methods such as LoRA.

You don’t need 175B parameters. You need 4B well-trained ones on the right text, in the right dialect, for the right task.
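To make the LoRA idea concrete, here is a dependency-free sketch of the low-rank update y = x·W + (α/r)·x·A·B. With one factor initialised to zero, training starts exactly at the base model, and only r·(d_in + d_out) parameters are trained instead of d_in·d_out:

```python
def matmul(A, B):
    """Plain-Python matrix product (rows of A times columns of B)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """y = x·W + (alpha/r)·x·A·B; W stays frozen, only A and B are trained."""
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    scale = alpha / r
    return [[b + scale * d for b, d in zip(br, dr)] for br, dr in zip(base, delta)]

def lora_param_counts(d_in, d_out, r):
    """Trainable parameters: full fine-tuning vs a rank-r LoRA adapter."""
    return d_in * d_out, r * (d_in + d_out)
```

For a 4096×4096 projection at rank 8, that is roughly 16.8M full parameters versus 65,536 adapter parameters, about a 256× reduction per matrix.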
5. Training & Fine-Tuning

We train models from scratch or continue from checkpoints, using domain-specific prompts and reinforcement learning from human feedback (RLHF) when needed. Our pipelines are cloud-optimized and GPU-efficient.

accelerate launch train.py --config arabic-legal-config.yaml
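The scratch-versus-checkpoint choice can be illustrated with a toy SGD loop; the one-parameter "model" and JSON checkpoint below are stand-ins for the real trainer state, showing only how resumable training threads state through a pipeline:

```python
import json

def train_step(w, batch, lr):
    """One SGD step on a 1-D least-squares toy model y = w*x."""
    grad = sum(2 * x * (w * x - y) for x, y in batch) / len(batch)
    return w - lr * grad

def train(batches, lr=0.1, checkpoint=None):
    """Start from scratch (w=0) or resume from a saved checkpoint string."""
    w = 0.0 if checkpoint is None else json.loads(checkpoint)["w"]
    for batch in batches:
        w = train_step(w, batch, lr)
    return w, json.dumps({"w": w})
```

Resuming from the serialized checkpoint reproduces the uninterrupted run exactly, which is the property a real checkpointed pipeline has to preserve.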
6. Evaluation & Bias Audits

Each model is tested on comprehension, generation quality, and ethical safety. We run hallucination detection, toxicity tests, and dialect understanding metrics to avoid cultural or political missteps.

Insight: Generic LLMs often misinterpret religious or cultural idioms in Arabic. WosoM's models are reviewed by native-speaking experts from the target region.
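A minimal audit harness might look like the following sketch; the term blocklist and exact-match scoring are deliberately simple placeholders for the real hallucination, toxicity, and dialect-understanding checks:

```python
def audit(outputs, blocklist):
    """Score model outputs: flag rate against a blocklist of terms,
    plus exact-match accuracy on dialect-labelled references."""
    flagged = sum(any(term in o["text"] for term in blocklist) for o in outputs)
    with_ref = [o for o in outputs if "reference" in o]
    correct = sum(o["text"] == o["reference"] for o in with_ref)
    return {
        "toxicity_flag_rate": flagged / len(outputs),
        "exact_match": correct / len(with_ref) if with_ref else None,
    }
```

Tracking both rates per dialect (not just in aggregate) is what surfaces the regional missteps the section describes.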
7. Deployment & Custom Interfaces

Our LLMs are deployed via API or embedded into custom applications. Whether it’s a chatbot, summarizer, or smart document editor — each interface is tailored for end-user fluency and speed.
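As a sketch of the API route, here is a minimal standard-library JSON endpoint around a stubbed generate() call. The /v1/generate path and the stub are illustrative assumptions; production serving would sit behind a real inference runtime:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str) -> str:
    """Placeholder for the deployed model's inference call."""
    return "ملخص: " + prompt[:50]

class LLMHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/generate":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"output": generate(payload["prompt"])}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("0.0.0.0", 8000), LLMHandler).serve_forever()
```

The same handler can back a chatbot, summarizer, or document editor; only the client interface changes.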

[Image: LLM Modeling Pipeline. Image credit: SciForce on Medium]
Arabic-first AI is not just a translation — it's a transformation of how machines understand culture, nuance, and intent.