Mahmoud Zalt - AI Engineer

Mahmoud Zalt

AI Engineer and Strategic Consultant

I turn AI into tangible results

Need guidance on getting real value from AI?

For founders and teams who want direct access to expert guidance.

Practical AI strategy and scalable architecture.

  • Aligning AI strategy with business goals and ROI
  • Designing system architecture and implementation plans
  • Staying involved until the critical decisions are made
Speaking at a tech conference

Knowledge sharing

Advancing excellence in software engineering

16+

Years of experience

98

Projects delivered

200+

Engineers mentored

50+

Companies supported


About me

My story and background.

Open Source

I built Laradock.io (a virtualized development environment with over 2.5 million downloads), the Apiato framework for building modular APIs, and the Porto software architectural pattern, alongside a collection of open-source tools designed to accelerate software development and streamline workflows for engineering teams. Laravel Daily ranked me among the top 10 engineers recommended to follow online.

When it comes to technology, it's all about learning by doing.
You won't figure everything out right away, but the more you engage, the more you grow.

Building AI systems
Speaking at a conference

Highlights of what I've built

A selection of projects I enjoyed working on.


AI Agents Workforce

Enterprise Platform, Agent Orchestration

January 2026

An enterprise-grade multi-tenant SaaS platform for deploying and managing AI agents at scale. Agents execute on a durable workflow engine with long-term memory backed by knowledge graphs and vector search, connect to 900+ external tools via OAuth, and interact through voice and text across web and desktop. Full billing, analytics, and admin built in.

Orchestration: Agents run on a durable workflow engine with priority queuing, retries, pause/resume, state recovery across failures, and idempotency on critical mutations. Each agent maintains a long-lived entity workflow that serializes incoming work by channel priority, so a live user message always preempts a background schedule. Full context awareness is injected on every interaction, including agent identity, state, capabilities, and team context.

Agent Memory: Six memory layers stacked from immediate to long-term: thread-scoped conversation state; a structured work journal logging every action and delegation; self-curated agent notes where agents record their own observations and preferences; a bi-temporal knowledge graph that ingests every conversation into entities and relationships with validity tracking, so newer facts automatically supersede outdated ones; a company knowledge graph built from ingested documents and connected data sources; and cross-agent delegation context for team knowledge transfer. Memory is adaptive: when facts change, agents revise their understanding rather than stacking contradictions. Agents learn from their own work outcomes and carry lessons forward. Agents interact with their memory through dedicated search and recall tools, and can explicitly mark information to learn and remember.

Knowledge Retrieval & Training: A multi-stage search pipeline enriches queries with speaker context and recent keywords, anchors results to relevant entities in the graph, and falls back to broadened queries on zero results.
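The bi-temporal supersession described above can be sketched roughly as follows. This is a minimal illustration, not the platform's actual code; the `Fact` shape and `FactStore` class are hypothetical stand-ins for the knowledge graph's validity-window mechanics:

```typescript
// Sketch of bi-temporal fact supersession: each fact carries a validity
// window, and recording a newer fact on the same subject and attribute
// closes the old fact's window instead of deleting it, so history survives.
type Fact = {
  subject: string;
  attribute: string;
  value: string;
  validFrom: Date;
  validTo: Date | null; // null = still current
};

class FactStore {
  private facts: Fact[] = [];

  // Record a new fact; any currently-valid fact on the same
  // (subject, attribute) pair is superseded rather than removed.
  record(subject: string, attribute: string, value: string, at: Date): void {
    for (const f of this.facts) {
      if (f.subject === subject && f.attribute === attribute && f.validTo === null) {
        f.validTo = at;
      }
    }
    this.facts.push({ subject, attribute, value, validFrom: at, validTo: null });
  }

  // Current view: only facts whose validity window is still open.
  current(subject: string, attribute: string): string | undefined {
    return this.facts.find(
      (f) => f.subject === subject && f.attribute === attribute && f.validTo === null
    )?.value;
  }

  // Point-in-time view: what the store believed at a given moment.
  asOf(subject: string, attribute: string, at: Date): string | undefined {
    return this.facts.find(
      (f) =>
        f.subject === subject &&
        f.attribute === attribute &&
        f.validFrom <= at &&
        (f.validTo === null || at < f.validTo)
    )?.value;
  }
}
```

Under this scheme, recording a user's new role leaves the old value queryable as-of an earlier date while the current view returns the newer one: outdated facts are superseded, never contradicted.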
A training system ingests data from file uploads, shared links, and third-party platforms such as document stores, knowledge bases, and messaging tools, using entity extraction and embedding generation to build a tenant-isolated knowledge base across graph and vector databases. Both agent memory and trained knowledge merge into a unified context block on every interaction.

Work Management: Chat-first interface with inline file uploads. Task boards per agent for tracking work, assigned by users or created by agents during execution. Recurring schedules on any cadence, created by users or agents. Team-level goals, KPIs, and mission text that drive autonomous prioritization. Per-skill and per-tool approval gates where agents pause for human sign-off. Structured work journals where agents log what they did, learned, and plan to do.

Drive: Every agent has a personal drive storing all work output: reports, documents, images, spreadsheets, code, and generated artifacts. Browsable, previewable, downloadable, and editable. Work lives in the drive, not buried in chat.

Multi-Agent Collaboration: A delegation system lets agents invoke teammates as subagents with full context transfer, and delegation results flow back into the conversation timeline. Organization-level routing directs broad requests to the right teams. Teams behave like real departments: the leader coordinates, delegates to specialists, and assembles outputs. Visual drag-and-drop org structure management. Built-in web search gives agents access to live information.

Safety & Governance: Prompt injection detection, output evaluation, PII redaction, topic control policies, and execution guardrails with cost alerts. Human-in-the-loop approval gates let agents pause for authorization on high-risk actions. Permissions and roles with admin, manager, and viewer access levels. Agent leaderboard ranking by productivity, cost efficiency, and output. Import and export for portable agent configurations.
All policies are tenant-configurable.

Observability: Real-time activity timeline with correlation IDs across agent chains, self-hosted LLM tracing with prompt inspection and token metrics, structured work journals aggregated across teams, and email and in-app notifications when work completes or needs attention.

Voice & Channels: Full-duplex voice streaming feeds into the same agent pipeline as text. A unified dispatcher funnels all input channels (web chat, voice, API, webhooks, scheduled triggers, Slack, WhatsApp, Telegram, email, phone/video, and cross-agent delegation) through one entry point, keeping the execution layer fully channel-agnostic. Per-agent channel configuration.

Office View: A 3D virtual office showing the entire organization at a glance, with a 2D fallback based on browser or device capabilities. Agents at desks, real-time status reflected visually, delegation shown as handoffs between team zones. Click any agent to jump into their workspace.

Desktop Companion: A lightweight desktop app giving agents access to the user's browser and computer: reading screens, clicking buttons, filling forms, navigating applications, and operating software without APIs. Voice or text control.

LLM Routing & Billing: Multi-provider model routing across three AI providers, with per-agent overrides spanning cheaper models for routine tasks to reasoning models for complex work. Local model inference for development and end-to-end testing without external API costs. Streaming thinking tokens. Tiered credit billing consumed across LLM usage, compute runtime, workflows, tasks, and voice minutes, with multi-threshold quota alerts and per-message cost breakdown. Usage-based pricing across Starter, Premium, and Enterprise tiers. Referral credit system.

Marketplace: Platform-curated and community-published agents, teams, and skills available for hire or cloning. Cloned configurations are independent copies with full customization.
Creator incentive system with credit rewards tied to clone usage and attribution tracking.

Integrations: Over 850 external tool integrations via MCP with OAuth connection management, cached tool schemas, health checks, and an app recommendation engine that suggests relevant capabilities when tools are connected. Existing agents and workflows can be imported as outsourced employees or connected via MCP to collaborate with teams. The platform is accessible from web, desktop, mobile, browser extensions, and third-party platform apps (Slack, Teams, Discord, Shopify, WordPress, Zapier), with an embeddable widget, client SDKs, and a CLI for programmatic access.

Storage: Eight database layers working together in a GraphRAG architecture. Relational for transactional data, workflow state, chat history, and billing. Graph for structural context, where entities and their relationships are traversed with temporal validity windows. Vector for semantic retrieval that finds relevant knowledge by meaning. In-memory for caching, session management, WebSocket channel layers, and task brokering. Columnar for high-volume LLM trace analytics. Object storage for file uploads, generated artifacts, and blob data. Time-series for cluster and application metrics. Log aggregation for centralized container log collection. Graph and vector databases index the same entities in different formats and are searched in parallel, merging results into a unified context block.

Architecture: Plugin-based modular codebase following ports and adapters. Each feature is an isolated module, with cross-module communication through an event bus and external services behind swappable adapter interfaces. Input and output channels use matching adapter registries, so adding a new channel requires only a new adapter, with no changes to agent logic.

User Interface: Responsive, accessible, mobile-first web interface with a modern design system and intuitive UX. Built with React and TypeScript over a GraphQL API.
WebSocket subscriptions drive token-by-token streaming, live activity feeds, status transitions, delegation visibility, and reasoning-token display for chain-of-thought models. Dual streaming channels ensure resilience. Full component library with isolated visual testing. The monorepo shares packages across frontend and backend, with build caching and dependency-graph-aware task execution.

Security & Testing: Row-level multi-tenancy enforced at the ORM, parameterized queries, CSRF protection, rate limiting, and agent-side prompt injection detection. Social OAuth sign-in, two-factor authentication, and transactional email delivery. Product analytics, error tracking, and performance monitoring across frontend and backend. Payment integration with webhook sync and subscription management. Full-stack automated testing from unit tests through end-to-end browser automation against real LLM responses, with explicit tenant isolation verification.

Infrastructure & DevOps: Over 20 containerized microservices orchestrated across development, staging, and production. Entire infrastructure defined as code, with automated cluster provisioning, VM image builds, and declarative release management. Five CI/CD pipelines covering linting, testing, image builds, and deployment, with auto-deploy to staging on merge and manual production gates with confirmation. Autoscaling on workflow queue depth. The cloud provider is swappable: four providers supported (three cloud, one local simulation), with more addable by plugging in a new module. A full local Kubernetes environment simulates production inside Docker for infrastructure testing without cloud costs. Zero-downtime rolling deployments with single-command rollback.
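The channel-agnostic dispatcher and adapter registry described above can be illustrated with a short sketch in TypeScript (the platform's stated language). This is a hypothetical simplification, not the actual API: the `ChannelAdapter` interface, the payload shapes, and the registry are all invented for illustration.

```typescript
// Sketch of a channel-agnostic dispatcher: every input channel implements
// the same adapter interface, so the agent execution layer never needs to
// know which channel a message came from.
interface IncomingMessage {
  channel: string;
  agentId: string;
  text: string;
}

interface ChannelAdapter {
  name: string;
  // Normalize a channel-specific payload into the unified message shape.
  parse(payload: unknown): IncomingMessage;
}

class ChannelRegistry {
  private adapters = new Map<string, ChannelAdapter>();

  // Adding a new channel is just registering one more adapter.
  register(adapter: ChannelAdapter): void {
    this.adapters.set(adapter.name, adapter);
  }

  // Single entry point: normalize, then hand off to the channel-agnostic core.
  dispatch(channelName: string, payload: unknown): IncomingMessage {
    const adapter = this.adapters.get(channelName);
    if (!adapter) throw new Error(`unknown channel: ${channelName}`);
    return adapter.parse(payload);
  }
}

// Two example adapters with different raw payload shapes.
const webChat: ChannelAdapter = {
  name: "web",
  parse: (p) => {
    const { agent, message } = p as { agent: string; message: string };
    return { channel: "web", agentId: agent, text: message };
  },
};

const whatsapp: ChannelAdapter = {
  name: "whatsapp",
  parse: (p) => {
    const { to, body } = p as { to: string; body: string };
    return { channel: "whatsapp", agentId: to, text: body };
  },
};
```

The point of the pattern is visible in the last two blocks: each channel's quirks stay inside its adapter, and supporting a new channel means writing one more adapter object, with zero changes to the dispatcher or the agent logic behind it.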

Private Stack
View Details

What I can offer

Collaboration that makes it easier to build what you envision.

Business solutions

Personal growth

Blog

Articles on AI and engineering.

Contact Mahmoud Zalt: Email [email protected], Phone (+34) 692 616 422, WhatsApp https://wa.me/34692616422. Located in Amsterdam, Netherlands.

Get in touch

Choose the contact method that suits you.

Direct message

Send via the website

Email address

Phone number

WhatsApp available

Video call

Google Meet

Schedule a call

Let's work together

Choose the type of collaboration that suits you.

AI Consulting

From $700 / day

  • AI strategy and technical decisions
  • Architecture and system reviews
  • Clarity on risk, scope, and ROI
  • Implementation guidance
  • Cost and performance tuning
Explore

Career Mentoring

From $150 / session

  • Career and leadership growth
  • Technical guidance and AI transition
  • Clear next steps
  • Resume/LinkedIn review
  • Interview preparation
Get started

Building AI Systems

From $10,000 / project

  • Full product delivery
  • Advanced architecture and engineering
  • Scalable, reliable systems
  • Discovery and scoping sprint
  • Documentation and handover
Discover
Speaking at the ITkonekt conference

System design

Architecture that scales in practice