Wind profile generation is achieved either with physical wind simulators or with data-driven approaches. This research project falls into the latter category: it aims to train AI models on Lidar data, thereby providing the differentiation needed to meet IFPEN's competitiveness goals. Specifically, this PhD aims to contribute to upstream methods for wind turbine design and to downstream methods for turbine optimization (such as increasing efficiency and monitoring fatigue). By leveraging real-world data collected during a measurement campaign operated by a consortium of public and private partners, the goal is to inject into the generated wind profiles information that cannot be captured by approaches based solely on physical wind simulation.

A classical approach to simulating wind fields relies on stochastic wind generators (e.g., NREL's TurbSim). These simulators rest on a set of assumptions (e.g., Taylor's frozen-turbulence hypothesis) and models (such as the Kaimal and Mann wind spectra, spatial coherence, and shear models) that often fail to represent real-world conditions. In particular, modeling the disturbances that occur in the wake of a wind turbine remains challenging.

The objective is therefore to train generative AI models on real-world data so as to produce synthetic winds that are representative of real and varied conditions. Preliminary work before this PhD explored GANs (Generative Adversarial Networks), which consist of a generator network and a discriminator network trained simultaneously in opposition: the generator produces synthetic wind, while the discriminator evaluates whether a sample comes from the generator or from the training data. Two specific architectures were explored: TGAN and MoCoGAN. TGAN (Temporal Generative Adversarial Network) incorporates a temporal aspect by decomposing the generation phase into a temporal generator and an image generator.
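For context, the Kaimal model mentioned above prescribes the power spectral density of each turbulence component as a function of frequency. A minimal numpy sketch of the IEC 61400-1 form is given below; the function name and parameter values are illustrative (L = 340.2 m is the standard IEC longitudinal length scale for hub heights above 60 m), not part of this project's codebase.

```python
import numpy as np

def kaimal_psd(f, sigma, L, V_hub):
    """IEC 61400-1 Kaimal power spectral density for one velocity component.

    f     : frequency array [Hz]
    sigma : standard deviation of the component [m/s]
    L     : integral length-scale parameter [m]
    V_hub : mean hub-height wind speed [m/s]
    """
    return 4.0 * sigma**2 * (L / V_hub) / (1.0 + 6.0 * f * L / V_hub) ** (5.0 / 3.0)

# Illustrative evaluation over a log-spaced frequency grid
f = np.logspace(-3, 1, 200)
psd = kaimal_psd(f, sigma=1.5, L=340.2, V_hub=10.0)
```

Stochastic generators such as TurbSim sample Fourier coefficients consistent with a target spectrum of this kind, which is precisely where the modeling assumptions criticized above enter.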
By separating the temporal and spatial aspects, fewer parameters are needed, since the same image generator is reused for each frame. MoCoGAN (Motion and Content Generative Adversarial Network) builds on this decomposition to improve the temporal quality of the generated samples. It uses an LSTM as the temporal generator, which allows an unlimited number of frames to be generated, and it splits the discriminator's workload between evaluating individual images and evaluating videos.

These approaches nevertheless ran into limitations. TGAN cannot generate sequences of the desired length because it relies exclusively on convolutional networks. MoCoGAN partially addresses this issue, but the limitation persists because its discriminator remains convolutional. Model evaluation could also be improved, both with more objective metrics and with feedback from domain experts. Finally, other AI architectures and approaches are needed to produce longer simulated sequences.

The research work will thus involve the following stages:

1. Developing evaluation metrics: conduct a thorough literature review to identify features that can help distinguish synthetic winds from real winds.
2. Designing new AI architectures: develop innovative network structures with suitable cost functions.
3. Validating and testing on real data: assess the ability of the developed approach to generate data under conditions far removed from the standard assumptions of wind simulators, focusing on the GAN's performance in such scenarios.
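As a toy illustration of the metric-design stage, one objective check is to compare the spectral content of a synthetic series against a reference series, since a generator that misses the target turbulence spectrum should score poorly. The sketch below (numpy only; the function name and the log-spectral distance choice are the author's illustration, not a metric from this project) compares two equal-length 1-D wind series.

```python
import numpy as np

def psd_distance(real, synth):
    """Log-spectral distance between a real and a synthetic wind series.

    Both inputs are 1-D arrays of equal length sampled at the same rate;
    a smaller value means the synthetic series better matches the real
    spectral content (0 for identical series).
    """
    def psd(x):
        x = x - x.mean()                      # remove the mean wind component
        return np.abs(np.fft.rfft(x))[1:] ** 2  # periodogram, zero-frequency bin dropped

    eps = 1e-12  # avoid log of zero for flat spectra
    d = np.log10(psd(real) + eps) - np.log10(psd(synth) + eps)
    return float(np.sqrt(np.mean(d**2)))

# Illustrative usage on synthetic noise
rng = np.random.default_rng(0)
real = rng.standard_normal(1024)
same = psd_distance(real, real)                      # identical series
other = psd_distance(real, rng.standard_normal(1024))
```

In practice such a scalar metric would complement, not replace, expert feedback and physically motivated checks (coherence, shear, intermittency).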