# 🤖 Adaptive AI Framework - Trading AI Secure
## 📋 Table of Contents

1. [Overview](#overview)
2. [Philosophy of the Self-Optimizing AI](#philosophy-of-the-self-optimizing-ai)
3. [ML Architecture](#ml-architecture)
4. [Continuous Parameter Optimization](#continuous-parameter-optimization)
5. [Regime Detection](#regime-detection)
6. [Adaptive Position Sizing](#adaptive-position-sizing)
7. [Validation and Anti-Overfitting](#validation-and-anti-overfitting)
8. [Technical Implementation](#technical-implementation)

---
## 🎯 Overview

The Trading AI Secure framework is designed to be **self-adaptive** and to **question itself constantly**. Unlike traditional systems with fixed parameters, the AI continuously adjusts its decisions based on:

- 📊 **Market conditions** (volatility, trend, liquidity)
- 📈 **Recent performance** (win rate, Sharpe ratio, drawdown)
- 🔗 **Cross-strategy correlations** (diversification)
- ⚠️ **Risk metrics** (VaR, CVaR, max drawdown)
- 🌍 **Macroeconomic events** (rates, inflation, sentiment)

---
## 🧠 Philosophy of the Self-Optimizing AI

### Core Principle: "Permanent Doubt"

The AI operates on the principle of **methodical doubt**:

```
┌─────────────────────────────────────────────────────────┐
│       CONTINUOUS SELF-IMPROVEMENT CYCLE (24h)           │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  1. COLLECT     → Market data + performance             │
│  2. ANALYZE     → Detect degradation/improvement        │
│  3. HYPOTHESIZE → Candidate parameter sets              │
│  4. TEST        → Backtesting + Monte Carlo             │
│  5. VALIDATE    → A/B testing in paper trading          │
│  6. DEPLOY      → Gradual adoption if validated         │
│  7. MONITOR     → Performance surveillance              │
│                                                         │
│  ↻ BACK TO STEP 1                                       │
└─────────────────────────────────────────────────────────┘
```
### The AI's Standing Questions

The AI continuously asks itself:

1. **Are my current parameters still optimal?**
   - Performance comparison vs. variants
   - Statistical drift detection

2. **Has the market regime changed?**
   - Bull → Bear → Sideways
   - High volatility → Low volatility
   - Trending → Mean-reverting

3. **Are my predictions calibrated?**
   - Predicted vs. realized probabilities
   - Brier score, log-loss

4. **Is my sizing appropriate for the current risk?**
   - Dynamic Kelly Criterion
   - Drawdown-based adjustment

5. **Are there better feature combinations?**
   - Evolving feature importance
   - Automatic selection

---
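The calibration check in question 3 reduces to a one-liner in practice. A minimal sketch with illustrative numbers (the `brier_score` helper and sample values are not project code):

```python
def brier_score(predicted_probs, outcomes):
    """Mean squared gap between predicted win probabilities and realized
    outcomes (1 = win, 0 = loss); 0.0 is perfect calibration."""
    return sum((p - o) ** 2 for p, o in zip(predicted_probs, outcomes)) / len(outcomes)

# Illustrative check: model probabilities vs. realized trade outcomes
probs = [0.70, 0.55, 0.80, 0.60]
wins = [1, 0, 1, 1]
score = brier_score(probs, wins)  # ≈ 0.148
```

A well-calibrated model should stay well below the 0.25 score that "always predict 0.5" would achieve.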
## 🏗️ ML Architecture

### Technology Stack

```python
# Core ML
scikit-learn==1.4.0        # Base models
xgboost==2.0.3             # Gradient boosting
lightgbm==4.1.0            # Fast alternative
catboost==1.2.2            # Categorical features

# Optimization
optuna==3.5.0              # Bayesian optimization
hyperopt==0.2.7            # Alternative optimizer
ray[tune]==2.9.0           # Distributed tuning

# Time Series
statsmodels==0.14.1        # ARIMA, GARCH
arch==6.2.0                # Volatility models
prophet==1.1.5             # Forecasting

# Deep Learning (optional)
tensorflow==2.15.0         # Neural networks
torch==2.1.2               # Alternative DL (PyPI package is `torch`, not `pytorch`)
keras-tuner==1.4.6         # Hyperparameter tuning

# Reinforcement Learning
stable-baselines3==2.2.1   # RL algorithms
gym==0.26.2                # RL environment
```
### Multi-Level ML Pipeline

```
┌──────────────────────────────────────────────────────────────┐
│                      ADAPTIVE ENSEMBLE                       │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐           │
│  │   Model 1   │  │   Model 2   │  │   Model 3   │           │
│  │   XGBoost   │  │  LightGBM   │  │  CatBoost   │           │
│  │ (Trending)  │  │ (Mean-Rev.) │  │ (Volatility)│           │
│  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘           │
│         │                │                │                  │
│         └────────────────┼────────────────┘                  │
│                          │                                   │
│                   ┌──────▼──────┐                            │
│                   │    META-    │                            │
│                   │   LEARNER   │                            │
│                   │  (Stacking) │                            │
│                   └──────┬──────┘                            │
│                          │                                   │
│                   ┌──────▼──────┐                            │
│                   │   REGIME    │                            │
│                   │  DETECTOR   │                            │
│                   │  (Weights)  │                            │
│                   └──────┬──────┘                            │
│                          │                                   │
│                   ┌──────▼──────┐                            │
│                   │    FINAL    │                            │
│                   │  DECISION   │                            │
│                   └─────────────┘                            │
└──────────────────────────────────────────────────────────────┘
```

---
## ⚙️ Continuous Parameter Optimization

### 1. Bayesian Optimization (Optuna)

**Frequency**: Daily (after market close)

**Optimized Parameters**:

```python
# Example Optuna configuration
def objective(trial):
    params = {
        # Model
        'n_estimators': trial.suggest_int('n_estimators', 100, 1000),
        'max_depth': trial.suggest_int('max_depth', 3, 12),
        'learning_rate': trial.suggest_float('learning_rate', 0.001, 0.3, log=True),

        # Features
        'lookback_period': trial.suggest_int('lookback_period', 10, 100),
        'volatility_window': trial.suggest_int('volatility_window', 5, 50),

        # Trading
        'stop_loss_atr_mult': trial.suggest_float('stop_loss_atr_mult', 1.0, 5.0),
        'take_profit_ratio': trial.suggest_float('take_profit_ratio', 1.5, 5.0),
        'min_probability': trial.suggest_float('min_probability', 0.5, 0.8),

        # Risk
        'kelly_fraction': trial.suggest_float('kelly_fraction', 0.1, 0.5),
        'max_position_size': trial.suggest_float('max_position_size', 0.01, 0.1),
    }

    # Backtest with these parameters; Optuna maximizes the returned Sharpe
    sharpe = backtest_strategy(params)
    return sharpe
```

**Safety Constraints**:

```python
# Hard limits to keep optimization away from dangerous parameters
PARAMETER_CONSTRAINTS = {
    'max_position_size': {'min': 0.01, 'max': 0.10},   # 1-10% max
    'stop_loss_atr_mult': {'min': 1.0, 'max': 5.0},    # Sensible stop distance
    'kelly_fraction': {'min': 0.1, 'max': 0.5},        # Conservative Kelly
    'min_probability': {'min': 0.5, 'max': 0.9},       # Decision threshold
}
```
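These constraints can be enforced mechanically before any suggested parameter set is accepted. A minimal sketch (the `clamp_params` helper is hypothetical; the constraint table mirrors the dict above):

```python
PARAMETER_CONSTRAINTS = {
    'max_position_size': {'min': 0.01, 'max': 0.10},
    'stop_loss_atr_mult': {'min': 1.0, 'max': 5.0},
    'kelly_fraction': {'min': 0.1, 'max': 0.5},
    'min_probability': {'min': 0.5, 'max': 0.9},
}


def clamp_params(params: dict) -> dict:
    """Clamp every constrained parameter into its safe range; pass others through."""
    safe = dict(params)
    for name, bounds in PARAMETER_CONSTRAINTS.items():
        if name in safe:
            safe[name] = min(max(safe[name], bounds['min']), bounds['max'])
    return safe

# An aggressive suggestion gets pulled back into the safe envelope:
# clamp_params({'kelly_fraction': 0.9}) → {'kelly_fraction': 0.5}
```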
### 2. Automatic A/B Testing

**Principle**: Test several strategy variants simultaneously.

```python
from typing import Dict


class ABTestingEngine:
    """
    Runs 2-3 parameter variants in parallel
    on paper trading for 7 days.
    """

    def __init__(self, current_params=None, optimized_params_1=None, optimized_params_2=None):
        self.variants = {
            'control': current_params,        # Current parameters
            'variant_a': optimized_params_1,  # Optuna suggestion 1
            'variant_b': optimized_params_2,  # Optuna suggestion 2
        }

    def allocate_capital(self):
        """Capital allocation per variant"""
        return {
            'control': 0.50,    # 50% on current parameters
            'variant_a': 0.25,  # 25% on variant A
            'variant_b': 0.25,  # 25% on variant B
        }

    def evaluate_winner(self, results: Dict):
        """
        Selection criteria:
        - Sharpe Ratio > control + 10%
        - Max Drawdown < control
        - Win Rate > control
        - Profit Factor > control
        """
        winner = max(results, key=lambda x: results[x]['sharpe'])

        if self.is_significantly_better(winner, 'control'):
            return winner
        return 'control'  # Keep current parameters if no variant is clearly better
```
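The `is_significantly_better` check is referenced but not shown in this document. One stdlib-only way to sketch it is a seeded bootstrap comparison of daily returns; the function name and the decision threshold (e.g. require a bootstrap probability above 0.95) are illustrative assumptions, not the project's implementation:

```python
import random


def prob_variant_beats_control(variant_returns, control_returns, n_boot=5000, seed=42):
    """Bootstrap estimate of P(mean(variant) > mean(control))
    from two samples of daily returns."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        v = [rng.choice(variant_returns) for _ in variant_returns]
        c = [rng.choice(control_returns) for _ in control_returns]
        if sum(v) / len(v) > sum(c) / len(c):
            wins += 1
    return wins / n_boot

# Declare a winner only when the bootstrap probability clears a high bar, e.g. > 0.95
```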
### 3. Reinforcement Learning for Position Sizing

**Approach**: An RL agent learns the optimal sizing.

```python
import gym
import numpy as np
from stable_baselines3 import PPO


class TradingEnvironment(gym.Env):
    """
    RL environment for position sizing.

    State:  [portfolio_value, current_drawdown, volatility,
             win_rate_recent, correlation_portfolio, regime]

    Action: position_size (0.0 to max_position_size)

    Reward: Sharpe ratio - penalty(drawdown) - penalty(correlation)
    """

    def __init__(self):
        self.action_space = gym.spaces.Box(
            low=0.0, high=0.1, shape=(1,)
        )
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(6,)
        )

    def step(self, action):
        position_size = action[0]

        # Simulate a trade at this sizing
        pnl = self.simulate_trade(position_size)

        # Compute reward
        reward = self.calculate_reward(pnl, position_size)

        # next_state and done come from the portfolio simulation
        next_state = self.get_state()
        done = self.is_episode_over()
        return next_state, reward, done, {}

    def calculate_reward(self, pnl, position_size):
        """
        Reward = risk-adjusted PnL.
        Penalties:
        - Excessive drawdown
        - Oversized position
        - High correlation
        """
        reward = pnl

        if self.current_drawdown > 0.05:
            reward -= 10 * self.current_drawdown

        if position_size > 0.05:
            reward -= 5 * (position_size - 0.05)

        return reward


# Training
env = TradingEnvironment()
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100000)
```
### 4. Parameter Drift Detection

**Goal**: Detect when parameters have become stale.

```python
import logging

import numpy as np
from scipy import stats

logger = logging.getLogger(__name__)


class ParameterDriftDetector:
    """
    Detects statistical shifts in performance.
    """

    def __init__(self, window=30):
        self.window = window
        self.historical_sharpe = []

    def detect_drift(self, current_sharpe: float) -> bool:
        """
        Statistical test: current performance vs. history.
        """
        self.historical_sharpe.append(current_sharpe)

        if len(self.historical_sharpe) < self.window:
            return False

        # Student's t-test
        recent = self.historical_sharpe[-7:]  # Last 7 days
        baseline = self.historical_sharpe[-self.window:-7]

        t_stat, p_value = stats.ttest_ind(recent, baseline)

        # Drift detected when p < 0.05 and performance has degraded
        if p_value < 0.05 and np.mean(recent) < np.mean(baseline):
            return True

        return False

    def trigger_reoptimization(self):
        """Launches Optuna optimization when drift is detected"""
        logger.warning("Parameter drift detected! Triggering reoptimization...")
        run_optuna_optimization()
```

---
## 🎭 Regime Detection

### Detecting Market Regimes

**Goal**: Adapt strategies to the current regime (Bull/Bear/Sideways).

```python
import numpy as np
from hmmlearn import hmm


class MarketRegimeDetector:
    """
    Hidden Markov Model for regime detection.

    States:
    - 0: Bull Market (trending up, low volatility)
    - 1: Bear Market (trending down, high volatility)
    - 2: Sideways (no trend, medium volatility)
    """

    def __init__(self, n_regimes=3):
        self.model = hmm.GaussianHMM(
            n_components=n_regimes,
            covariance_type="full",
            n_iter=1000
        )

    def fit(self, returns, volatility):
        """
        Fits the HMM on historical data.
        """
        features = np.column_stack([returns, volatility])
        self.model.fit(features)

    def predict_regime(self, recent_returns, recent_volatility):
        """
        Predicts the current regime.
        """
        features = np.column_stack([recent_returns, recent_volatility])
        regime = self.model.predict(features)[-1]

        # Note: HMM state indices are arbitrary after fitting; in practice the
        # states must be mapped to BULL/BEAR/SIDEWAYS by inspecting their means.
        regime_names = {0: 'BULL', 1: 'BEAR', 2: 'SIDEWAYS'}
        return regime_names[regime]

    def get_regime_probabilities(self, recent_data):
        """
        Probability of each regime.
        """
        return self.model.predict_proba(recent_data)[-1]
```
### Strategy Adaptation per Regime

```python
from typing import Dict

REGIME_STRATEGY_WEIGHTS = {
    'BULL': {
        'scalping': 0.2,
        'intraday': 0.5,     # Favour intraday in bull markets
        'swing': 0.3,
    },
    'BEAR': {
        'scalping': 0.4,     # Favour scalping in bear markets
        'intraday': 0.3,
        'swing': 0.1,        # Reduce swing in bear markets
        'short_bias': 0.2,   # Enable short bias
    },
    'SIDEWAYS': {
        'scalping': 0.5,     # Favour scalping in sideways markets
        'intraday': 0.3,
        'swing': 0.2,
    }
}


def adjust_strategy_allocation(regime: str) -> Dict[str, float]:
    """
    Adjusts capital allocation per strategy for the given regime.
    """
    return REGIME_STRATEGY_WEIGHTS[regime]
```

---
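A hard switch between regimes causes abrupt reallocations. Since the detector also exposes `get_regime_probabilities`, one option is to blend the weight tables by regime probability. This is a sketch of that idea, not current behaviour (note that the `short_bias` key only exists under BEAR):

```python
REGIME_STRATEGY_WEIGHTS = {
    'BULL':     {'scalping': 0.2, 'intraday': 0.5, 'swing': 0.3},
    'BEAR':     {'scalping': 0.4, 'intraday': 0.3, 'swing': 0.1, 'short_bias': 0.2},
    'SIDEWAYS': {'scalping': 0.5, 'intraday': 0.3, 'swing': 0.2},
}


def blended_allocation(regime_probs: dict) -> dict:
    """Probability-weighted average of the per-regime weight tables."""
    blended = {}
    for regime, p in regime_probs.items():
        for strategy, w in REGIME_STRATEGY_WEIGHTS[regime].items():
            blended[strategy] = blended.get(strategy, 0.0) + p * w
    return blended

# 70% BULL / 30% SIDEWAYS → intraday weight 0.7*0.5 + 0.3*0.3 = 0.44
```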
## 📏 Adaptive Position Sizing

### Dynamic Kelly Criterion

```python
import numpy as np


class AdaptiveKellyCriterion:
    """
    Kelly Criterion with dynamic adjustments.

    Kelly% = (p * b - q) / b
    where:
    - p = win probability (predicted by the ML model)
    - q = loss probability (1 - p)
    - b = average win/loss ratio
    """

    def __init__(self, kelly_fraction=0.25):
        self.kelly_fraction = kelly_fraction  # Conservative fraction

    def calculate_position_size(
        self,
        win_probability: float,
        avg_win: float,
        avg_loss: float,
        current_drawdown: float,
        portfolio_volatility: float
    ) -> float:
        """
        Computes the optimal position size.
        """
        # Base Kelly
        b = avg_win / abs(avg_loss)
        kelly = (win_probability * b - (1 - win_probability)) / b

        # Dynamic adjustments
        kelly = self._adjust_for_drawdown(kelly, current_drawdown)
        kelly = self._adjust_for_volatility(kelly, portfolio_volatility)
        kelly = self._adjust_for_confidence(kelly, win_probability)

        # Apply the conservative fraction
        position_size = kelly * self.kelly_fraction

        # Hard limits
        return np.clip(position_size, 0.01, 0.10)

    def _adjust_for_drawdown(self, kelly: float, drawdown: float) -> float:
        """
        Reduce sizing under elevated drawdown.
        """
        if drawdown > 0.05:  # > 5% drawdown
            reduction = 1 - (drawdown / 0.10)  # Linear reduction
            kelly *= max(reduction, 0.5)       # Floor at 50% of Kelly
        return kelly

    def _adjust_for_volatility(self, kelly: float, volatility: float) -> float:
        """
        Reduce sizing under elevated volatility.
        """
        if volatility > 0.02:  # > 2% daily volatility
            kelly *= (0.02 / volatility)
        return kelly

    def _adjust_for_confidence(self, kelly: float, probability: float) -> float:
        """
        Reduce sizing under low confidence.
        """
        if probability < 0.6:
            kelly *= (probability / 0.6)
        return kelly
```
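Plugging illustrative numbers into the formula: with p = 0.55 and an average win twice the average loss (b = 2), full Kelly is (0.55·2 − 0.45)/2 = 0.325, and the conservative 0.25 fraction brings the position to about 8% of capital. A standalone sketch of just that arithmetic (helper name is hypothetical):

```python
def fractional_kelly(p, b, fraction=0.25):
    """Kelly% = (p*b - q) / b with q = 1 - p, scaled by a conservative fraction."""
    kelly = (p * b - (1 - p)) / b
    return kelly * fraction

size = fractional_kelly(0.55, 2.0)  # 0.325 * 0.25 = 0.08125, i.e. ~8% of capital
```

A fair coin at even odds (p = 0.5, b = 1) correctly yields a zero position.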
---
## ✅ Validation and Anti-Overfitting

### Walk-Forward Analysis

```python
import numpy as np


class WalkForwardValidator:
    """
    Rigorous temporal validation.

    Principle:
    1. Train on window N
    2. Test on window N+1
    3. Slide the window
    4. Repeat
    """

    def __init__(self, train_window=252, test_window=63):
        self.train_window = train_window  # 1 year of daily bars
        self.test_window = test_window    # 3 months

    def validate(self, data, strategy):
        """
        Runs the walk-forward analysis.
        """
        results = []

        for i in range(0, len(data) - self.train_window - self.test_window, self.test_window):
            # Training window
            train_data = data[i:i + self.train_window]

            # Test window
            test_data = data[i + self.train_window:i + self.train_window + self.test_window]

            # Train
            strategy.fit(train_data)

            # Test
            performance = strategy.backtest(test_data)
            results.append(performance)

        return self._aggregate_results(results)

    def _aggregate_results(self, results):
        """
        Aggregates walk-forward results.
        """
        return {
            'mean_sharpe': np.mean([r['sharpe'] for r in results]),
            'std_sharpe': np.std([r['sharpe'] for r in results]),
            'mean_drawdown': np.mean([r['max_drawdown'] for r in results]),
            'worst_period': min(results, key=lambda x: x['sharpe']),
        }
```
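For intuition about how many folds the sliding loop produces: with daily bars, a 252-bar train window and a 63-bar test window, four years of data (~1008 bars) yield 11 out-of-sample folds. A standalone sketch of the same index arithmetic (the helper name is hypothetical):

```python
def walk_forward_windows(n_bars, train_window=252, test_window=63):
    """Return (train_start, test_start, test_end) index triples,
    stepping forward by the test window each fold."""
    windows = []
    for i in range(0, n_bars - train_window - test_window, test_window):
        windows.append((i, i + train_window, i + train_window + test_window))
    return windows

folds = walk_forward_windows(1008)
# folds[0]  → (0, 252, 315); folds[-1] → (630, 882, 945); 11 folds in total
```

Note that the exclusive range bound means the final partial window of data is never tested, matching the loop above.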
### Monte Carlo Simulation

```python
import numpy as np


class MonteCarloValidator:
    """
    Monte Carlo simulation for robustness.
    """

    def __init__(self, n_simulations=10000):
        self.n_simulations = n_simulations

    def simulate(self, strategy, historical_trades):
        """
        Simulates N scenarios by resampling trades.

        Note: a pure reordering leaves order-independent metrics such as the
        Sharpe ratio unchanged; bootstrap resampling (with replacement) is used
        so that both Sharpe and the path-dependent drawdown vary across runs.
        """
        results = []

        for _ in range(self.n_simulations):
            # Bootstrap-resample the trade sequence
            resampled_trades = np.random.choice(
                historical_trades, size=len(historical_trades), replace=True
            )

            # Compute metrics
            sharpe = self._calculate_sharpe(resampled_trades)
            max_dd = self._calculate_max_drawdown(resampled_trades)

            results.append({'sharpe': sharpe, 'max_dd': max_dd})

        return self._analyze_distribution(results)

    def _analyze_distribution(self, results):
        """
        Analyzes the distribution of results.
        """
        sharpes = [r['sharpe'] for r in results]
        drawdowns = [r['max_dd'] for r in results]

        return {
            'sharpe_mean': np.mean(sharpes),
            'sharpe_5th_percentile': np.percentile(sharpes, 5),
            'sharpe_95th_percentile': np.percentile(sharpes, 95),
            'max_dd_mean': np.mean(drawdowns),
            'max_dd_95th_percentile': np.percentile(drawdowns, 95),
            'probability_positive_sharpe': np.mean(np.array(sharpes) > 0),
        }
```

---
## 💻 Technical Implementation

### Full Architecture

```python
# src/ml/adaptive_ai_engine.py

import logging
from dataclasses import dataclass
from typing import Dict, List, Optional

import numpy as np
import optuna

logger = logging.getLogger(__name__)


@dataclass
class AIConfig:
    """Adaptive AI configuration"""
    optimization_frequency: str = 'daily'    # daily, weekly
    ab_test_duration_days: int = 7
    parameter_drift_window: int = 30
    kelly_fraction: float = 0.25
    min_sharpe_improvement: float = 0.1      # Minimum 10% improvement


class AdaptiveAIEngine:
    """
    Central adaptive AI engine.

    Responsibilities:
    - Continuous parameter optimization
    - Regime detection
    - Adaptive position sizing
    - Anti-overfitting validation
    """

    def __init__(self, config: AIConfig):
        self.config = config

        # Components
        self.optimizer = OptunaOptimizer()
        self.regime_detector = MarketRegimeDetector()
        self.kelly_calculator = AdaptiveKellyCriterion(config.kelly_fraction)
        self.drift_detector = ParameterDriftDetector()
        self.ab_tester = ABTestingEngine()

        # State
        self.current_params = {}
        self.current_regime = 'SIDEWAYS'
        self.performance_history = []

    async def daily_optimization_cycle(self):
        """
        Daily optimization cycle.
        """
        logger.info("Starting daily optimization cycle...")

        # 1. Detect drift
        if self.drift_detector.detect_drift(self.get_recent_sharpe()):
            logger.warning("Parameter drift detected!")

        # 2. Optimize new parameters
        new_params = await self.optimizer.optimize()

        # 3. Validate with walk-forward
        if self._validate_params(new_params):
            # 4. Launch A/B test
            self.ab_tester.add_variant('optimized', new_params)

        # 5. Detect regime
        self.current_regime = self.regime_detector.predict_regime(
            self.get_recent_returns(),
            self.get_recent_volatility()
        )

        # 6. Adjust allocations
        self._adjust_strategy_weights(self.current_regime)

        logger.info(f"Optimization cycle complete. Regime: {self.current_regime}")

    def calculate_position_size(
        self,
        signal_probability: float,
        current_price: float,
        portfolio_value: float
    ) -> float:
        """
        Computes the optimal position size.
        """
        # Current metrics
        current_dd = self.get_current_drawdown()
        portfolio_vol = self.get_portfolio_volatility()

        # Adaptive Kelly
        kelly_size = self.kelly_calculator.calculate_position_size(
            win_probability=signal_probability,
            avg_win=self.get_avg_win(),
            avg_loss=self.get_avg_loss(),
            current_drawdown=current_dd,
            portfolio_volatility=portfolio_vol
        )

        # Convert to a number of units
        position_value = portfolio_value * kelly_size
        units = position_value / current_price

        return units

    def _validate_params(self, params: Dict) -> bool:
        """
        Validates a candidate parameter set.
        """
        validator = WalkForwardValidator()
        # Build a strategy instance from the candidate parameters, since
        # WalkForwardValidator.validate expects a strategy, not a raw dict
        strategy = self.build_strategy(params)
        results = validator.validate(self.get_historical_data(), strategy)

        # Validation criteria
        if results['mean_sharpe'] < 1.5:
            return False
        if results['mean_drawdown'] > 0.10:
            return False

        return True

    def get_model_explanation(self, prediction: float) -> Dict:
        """
        Explains a model decision (SHAP values).
        """
        import shap

        explainer = shap.TreeExplainer(self.model)
        shap_values = explainer.shap_values(self.current_features)

        return {
            'prediction': prediction,
            'feature_importance': dict(zip(
                self.feature_names,
                shap_values[0]
            )),
            'base_value': explainer.expected_value
        }
```

---
## 📊 AI Monitoring Metrics

### Dashboard Metrics

```python
# Schema: each value indicates the expected type of the metric
AI_METRICS = {
    'optimization': {
        'last_optimization_date': datetime,
        'optimization_frequency': str,
        'parameters_changed': int,
        'improvement_sharpe': float,
    },
    'regime_detection': {
        'current_regime': str,
        'regime_probability': float,
        'regime_changes_last_30d': int,
    },
    'parameter_drift': {
        'drift_detected': bool,
        'drift_magnitude': float,
        'days_since_last_drift': int,
    },
    'ab_testing': {
        'active_tests': int,
        'winning_variant': str,
        'improvement_vs_control': float,
    },
    'model_performance': {
        'sharpe_ratio_7d': float,
        'sharpe_ratio_30d': float,
        'calibration_score': float,  # Brier score
        'feature_stability': float,
    }
}
```

---
## Implementation Status (2026-03-08)

| Component | Status | File |
|---|---|---|
| Optuna optimization | ✅ Done | `src/ml/parameter_optimizer.py` |
| HMM regime detection | ✅ Done | `src/ml/regime_detector.py` |
| Adaptive Kelly Criterion | ✅ Done | `src/ml/position_sizing.py` |
| Walk-forward validation | ✅ Done | `src/ml/walk_forward.py` |
| Feature engineering | ✅ Done | `src/ml/feature_engineering.py` |
| ML-Driven Strategy | ✅ Done | see section below |
| Automatic A/B testing | ⚪ Planned | — |
| Reinforcement learning | ⚪ Planned | — |
| Sentiment analysis | ⚪ Planned | — |

---
## ML-Driven Strategy — Learning Human Trading Patterns (2026-03-08)

### Concept

The **MLDrivenStrategy** is an additional AI layer that learns to reproduce
trading decisions based on classic technical indicators.

Rather than hand-coding rules ("if RSI < 30 AND price near support →
buy"), XGBoost/LightGBM is left to discover these combinations automatically
from thousands of historical bars.

### Architecture

```
OHLCV history
    ↓
TechnicalFeatureBuilder    ← RSI, MACD, BB, S/R, pivots, candlesticks, EMAs...
    ↓
LabelGenerator             ← Forward simulation: TP/SL hit? → LONG/SHORT/NEUTRAL
    ↓
XGBoost / LightGBM         ← Supervised training + 3-fold walk-forward
    ↓
MLStrategyModel (joblib)   ← Saved under models/ml_strategy/
    ↓
MLDrivenStrategy.analyze() ← Signal + confidence score → RiskManager
```
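The forward-simulation labelling step can be illustrated with a toy version. This is only a rough sketch of the idea; the function name, horizon, and thresholds are hypothetical, not the real `LabelGenerator` implementation:

```python
def label_bar(closes, i, tp=0.01, sl=0.005, horizon=10):
    """Label bar i by walking forward up to `horizon` bars:
    'LONG' if the +1% take-profit is hit before the -0.5% stop,
    'SHORT' if the down-move comes first, 'NEUTRAL' if neither is reached."""
    entry = closes[i]
    for future in closes[i + 1:i + 1 + horizon]:
        change = (future - entry) / entry
        if change >= tp:
            return 'LONG'
        if change <= -sl:
            return 'SHORT'
    return 'NEUTRAL'

# A steady rise of ~0.3% per bar hits the +1% take-profit within the horizon → 'LONG'
```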
### How It Differs from the Other ML Components

| Component | Role |
|---|---|
| `RegimeDetector (HMM)` | Detects the global regime (trend/range/volatile) |
| `MLEngine` | Adapts strategy parameters to the regime |
| `ParameterOptimizer (Optuna)` | Optimizes hyperparameters of existing strategies |
| **`MLStrategyModel` (new)** | **Learns trading signals directly from TA features** |

### Files

- `src/ml/features/technical_features.py` — TechnicalFeatureBuilder
- `src/ml/features/label_generator.py` — LabelGenerator
- `src/ml/ml_strategy_model.py` — MLStrategyModel
- `src/strategies/ml_driven/ml_strategy.py` — MLDrivenStrategy

Detailed documentation: [docs/ML_STRATEGY_GUIDE.md](ML_STRATEGY_GUIDE.md)

---
## Next Steps

- [ ] Test MLDrivenStrategy on EURUSD/1h (POST /trading/train)
- [ ] Compare backtest performance against ScalpingStrategy
- [ ] Sentiment analysis — `src/data/sentiment_service.py` (Alpha Vantage News + FinBERT)
- [ ] HMM persistence — save the model to avoid re-training on every /ml/status call
- [ ] IG Markets connector (Phase 5)

---
**Note**: This framework is designed to keep the AI **adaptive** and **self-critical**, continuously adjusting its decisions to maximize risk-adjusted performance.