Machine Learning & Simulation
Unrolled vs. Implicit Autodiff
Unrolled differentiation of an iterative algorithm can produce the "Curse of Unrolling": a counter-intuitive convergence behavior of the Jacobian suboptimality. How much better is implicit automatic differentiation? Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/adjoints_sensitivities_automatic_differentiation/curse_of_unrolling_against_implicit_diff.ipynb
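To make the contrast concrete, here is a minimal sketch (not the linked notebook's code; the toy fixed-point map f and all constants are my own choices) of implicit differentiation via the implicit function theorem: differentiate only at the converged solution instead of backpropagating through the iterations.

```python
# Implicit differentiation of a fixed point x* = f(x*, theta):
# dx*/dtheta = (I - df/dx)^(-1) @ df/dtheta, evaluated at the converged x*.
import jax
import jax.numpy as jnp

def f(x, theta):
    # Toy contraction mapping; its fixed point depends on theta
    return 0.5 * jnp.tanh(x) + theta

def solve_fixed_point(theta, n_iter=100):
    x = jnp.zeros_like(theta)
    for _ in range(n_iter):
        x = f(x, theta)
    return x

def implicit_jacobian(theta):
    x_star = solve_fixed_point(theta)
    df_dx = jax.jacobian(f, argnums=0)(x_star, theta)
    df_dtheta = jax.jacobian(f, argnums=1)(x_star, theta)
    identity = jnp.eye(x_star.shape[0])
    return jnp.linalg.solve(identity - df_dx, df_dtheta)

theta = jnp.array([0.3, -0.1])
print(implicit_jacobian(theta))  # Jacobian of the solution map, exact at convergence
```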
-----
👉 This educational series is supported by the world-leaders in integrating machine learning and artificial intelligence with simulation and scientific computing, Pasteur Labs and Institute for Simulation Intelligence. Check out simulation.science/ for more on their pursuit of 'Nobel-Turing' technologies (arxiv.org/abs/2112.03235 ), and for partnership or career opportunities.
-------
📝 : Check out the GitHub Repository of the channel, where I upload all the handwritten notes and source-code files (contributions are very welcome): github.com/Ceyron/machine-learning-and-simulation
📢 : Follow me on LinkedIn or Twitter for updates on the channel and other cool Machine Learning & Simulation stuff: www.linkedin.com/in/felix-koehler and felix_m_koehler
💸 : If you want to support my work on the channel, you can become a patron on Patreon here: www.patreon.com/MLsim
🪙: Or you can make a one-time donation via PayPal: www.paypal.com/paypalme/FelixMKoehler
-------
⚙️ My Gear:
(Below are affiliate links to Amazon. If you decide to purchase the product or something else on Amazon through this link, I earn a small commission.)
- 🎙️ Microphone: Blue Yeti: amzn.to/3NU7OAs
- ⌨️ Logitech TKL Mechanical Keyboard: amzn.to/3JhEtwp
- 🎨 Gaomon Drawing Tablet (similar to a WACOM Tablet, but cheaper, works flawlessly under Linux): amzn.to/37katmf
- 🔌 Laptop Charger: amzn.to/3ja0imP
- 💻 My Laptop (generally I like the Dell XPS series): amzn.to/38xrABL
- 📱 My Phone: Fairphone 4 (I love the sustainability and repairability aspect of it): amzn.to/3Jr4ZmV
If I had to purchase these items again, I would probably change the following:
- 🎙️ Rode NT: amzn.to/3NUIGtw
- 💻 Framework Laptop (I do not get a commission here, but I love the vision of Framework. It will definitely be my next Ultrabook): frame.work
As an Amazon Associate I earn from qualifying purchases.
-------
Timestamps:
00:00 Intro
00:00 Recap
01:55 Theory of Implicit Diff
03:32 Compute implicit Jacobian
04:00 Plotting and discussion
06:23 Outro
Views: 671

Videos

Unrolled Autodiff of iterative Algorithms
1.3K views · 28 days ago
When you have iterative parts in a computational graph (like optimization problems, linear solves, root-finding, etc.) you can either unroll-differentiate or implicitly differentiate them. The former shows a counter-intuitive convergence of the Jacobian (the "Curse of Unrolling"). Code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/adjoints_sensitivities_automatic_differentiation/curse_of_u...
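For illustration, a minimal sketch of the unrolled approach (my own toy inner objective and step count, not the notebook verbatim): the solver loop stays inside the autodiff graph, so jax.jacobian backpropagates through every gradient-descent step.

```python
# Unroll K steps of gradient descent on an inner objective and differentiate
# the final iterate w.r.t. theta by backpropagating through the loop.
import jax
import jax.numpy as jnp

def inner_objective(x, theta):
    return 0.5 * jnp.sum((x - theta) ** 2)

def unrolled_solution(theta, n_steps=50, lr=0.1):
    x = jnp.zeros_like(theta)
    grad_fn = jax.grad(inner_objective, argnums=0)
    for _ in range(n_steps):
        x = x - lr * grad_fn(x, theta)   # each step stays in the autodiff graph
    return x

theta = jnp.array([1.0, 2.0])
# Jacobian of the (approximate) solution w.r.t. theta; for few steps it can be
# far from the Jacobian of the exact solution x*(theta) = theta (the identity).
print(jax.jacobian(unrolled_solution)(theta))
```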
UNet Tutorial in JAX
1.2K views · 1 month ago
UNets are a famous architecture for image segmentation. With their hierarchical structure they have a wide receptive field. Similar to multigrid methods, we will use them in this video to solve the Poisson equation in the Equinox deep learning framework. Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/neural_operators/simple_unet_poisson_solver_in_jax.ipynb...
DeepONet Tutorial in JAX
2K views · 2 months ago
Neural operators are deep learning architectures that approximate nonlinear operators, for instance, to learn the solution to a parametric PDE. The DeepONet is one type in which we can query the output at arbitrary points. Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/neural_operators/simple_deepOnet_in_JAX.ipynb 👉 This educational series is supported by ...
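A rough sketch of the DeepONet idea (layer sizes, initializer, and helper functions are my own simplifications, not the linked notebook): a branch net encodes the input function at fixed sensor locations, a trunk net encodes the query coordinate, and their dot product gives the operator output.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Simple tanh MLP; last layer is linear
    for W, b in params[:-1]:
        x = jnp.tanh(W @ x + b)
    W, b = params[-1]
    return W @ x + b

def init_mlp(key, sizes):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (o, i)) / jnp.sqrt(i), jnp.zeros(o))
            for k, i, o in zip(keys, sizes[:-1], sizes[1:])]

def deeponet(branch_params, trunk_params, u_sensors, y):
    b = mlp(branch_params, u_sensors)          # latent coefficients from input function
    t = mlp(trunk_params, jnp.atleast_1d(y))   # "basis" evaluated at the query point
    return jnp.dot(b, t)                       # query the output at arbitrary y

key_b, key_t = jax.random.split(jax.random.PRNGKey(0))
branch_params = init_mlp(key_b, [100, 64, 32])   # 100 sensor values of the input function
trunk_params = init_mlp(key_t, [1, 64, 32])      # scalar query coordinate
u = jnp.sin(jnp.linspace(0, 2 * jnp.pi, 100))
print(deeponet(branch_params, trunk_params, u, 0.5))
```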
Spectral Derivative in 3d using NumPy and the RFFT
987 views · 2 months ago
The Fast Fourier Transform works in arbitrary dimensions. Hence, we can also use it to differentiate n-dimensional fields spectrally. In this video, we clarify the details of this procedure, including how to adapt the np.meshgrid indexing style. Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/fft_and_spectral_methods/spectral_derivative_3d_in_numpy_with_rfft.ipynb...
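A minimal sketch of the 3D case (grid, domain, and test field are my own choices): only the last axis is halved by the real-valued FFT, so the wavenumber grid mixes fftfreq and rfftfreq.

```python
import numpy as np

n = 64
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")   # "ij" keeps the axis order (x, y, z)
u = np.sin(X) * np.cos(2 * Y) * np.cos(Z)

# Full spectrum on the first two axes, halved spectrum on the last axis
kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
ky = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kz = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)
KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")

u_hat = np.fft.rfftn(u)                          # last axis has n//2 + 1 entries
du_dx = np.fft.irfftn(1j * KX * u_hat, s=u.shape)

# Compare against the analytic derivative cos(x) cos(2y) cos(z)
print(np.max(np.abs(du_dx - np.cos(X) * np.cos(2 * Y) * np.cos(Z))))
```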
NumPy.fft.rfft2 - real-valued spectral derivatives in 2D
510 views · 3 months ago
How does the real-valued fast Fourier transformation work in two dimensions? The Fourier shape becomes a bit tricky when only one axis is halved, requiring special care when setting up the wavenumber array. Here is the notebook: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/fft_and_spectral_methods/spectral_derivative_2d_in_numpy_with_rfft.ipynb 👉 This educational series i...
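A small sketch of the 2D real-valued transform (example field and grid are my own assumptions): with np.fft.rfft2 only the last axis is halved, so the full fftfreq goes on axis 0 and the halved rfftfreq on axis 1.

```python
import numpy as np

n = 128
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(3 * X) * np.sin(Y)

kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # full spectrum along axis 0
ky = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)   # halved spectrum along axis 1
KX, KY = np.meshgrid(kx, ky, indexing="ij")    # shape (n, n//2 + 1), like rfft2's output

u_hat = np.fft.rfft2(u)
du_dy = np.fft.irfft2(1j * KY * u_hat, s=u.shape)
print(np.max(np.abs(du_dy - np.sin(3 * X) * np.cos(Y))))  # analytic: sin(3x) cos(y)
```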
2D Spectral Derivatives with NumPy.FFT
1.1K views · 3 months ago
The Fast Fourier Transform makes it easy to take derivatives of periodic functions. In this video, we look at how this concept extends to two dimensions, such as how to create the wavenumber grid and how to deal with partial derivatives. Here is the notebook: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/fft_and_spectral_methods/spectral_derivative_2d_in_numpy.ipynb 👉 This...
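As a quick illustration (my own example function, not the notebook's), here is a 2D spectral derivative with the complex fft2: build a wavenumber grid per axis and multiply by i*k in Fourier space.

```python
import numpy as np

n = 128
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(np.sin(X) + np.cos(2 * Y))

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")

u_hat = np.fft.fft2(u)
du_dx = np.real(np.fft.ifft2(1j * KX * u_hat))
du_dy = np.real(np.fft.ifft2(1j * KY * u_hat))

print(np.max(np.abs(du_dx - np.cos(X) * u)))           # analytic: cos(x) * u
print(np.max(np.abs(du_dy + 2 * np.sin(2 * Y) * u)))   # analytic: -2 sin(2y) * u
```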
Softmax - Pullback/vJp rule
406 views · 4 months ago
The softmax is the last layer in deep networks used for classification, but how do you backpropagate over it? What primitive rule must the automatic differentiation framework understand? Here are the notes: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/adjoints_sensitivities_automatic_differentiation/rules/softmax_pullback.pdf 👉 This educational series is supported by the w...
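A compact sketch of the rule (my own verification code, not the handwritten notes): with s = softmax(x), the Jacobian is J_ij = s_i (δ_ij - s_j), so the pullback is J^T v = s ⊙ (v - ⟨v, s⟩), which we can check against jax.vjp.

```python
import jax
import jax.numpy as jnp

def softmax(x):
    z = jnp.exp(x - jnp.max(x))   # shifted for numerical stability
    return z / jnp.sum(z)

def softmax_vjp_rule(x, cotangent):
    s = softmax(x)
    return s * (cotangent - jnp.dot(cotangent, s))

x = jnp.array([0.3, -1.2, 2.0, 0.5])
v = jnp.array([1.0, 0.0, -2.0, 0.7])

_, vjp_fn = jax.vjp(softmax, x)
print(vjp_fn(v)[0])               # JAX's pullback
print(softmax_vjp_rule(x, v))     # manual rule, should match
```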
Softmax - Pushforward/Jvp rule
394 views · 4 months ago
The softmax is a common function in machine learning to map logit values to discrete probabilities. It is often used as the final layer in a neural network applied to multinomial regression problems. Here, we derive its rule for forward-mode AD. Here are the notes: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/adjoints_sensitivities_automatic_differentiation/rules/softmax_...
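A compact sketch of the forward-mode counterpart (again my own check, not the notes verbatim): because the softmax Jacobian J_ij = s_i (δ_ij - s_j) is symmetric, the pushforward J v takes the same form s ⊙ (v - ⟨s, v⟩), which we can compare against jax.jvp.

```python
import jax
import jax.numpy as jnp

def softmax(x):
    z = jnp.exp(x - jnp.max(x))
    return z / jnp.sum(z)

def softmax_jvp_rule(x, tangent):
    s = softmax(x)
    return s * (tangent - jnp.dot(s, tangent))

x = jnp.array([0.3, -1.2, 2.0, 0.5])
v = jnp.array([1.0, 0.0, -2.0, 0.7])

_, jvp_out = jax.jvp(softmax, (x,), (v,))
print(jvp_out)                   # JAX's pushforward
print(softmax_jvp_rule(x, v))    # manual rule, should match
```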
Fourier Neural Operators (FNO) in JAX
5K views · 5 months ago
Neural Operators are mappings between (discretized) function spaces, like from the IC of a PDE to its solution at a later point in time. FNOs do so by employing a spectral convolution that allows for multiscale properties. Let's code a simple example in JAX: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/neural_operators/simple_FNO_in_JAX.ipynb 👉 This educational series is ...
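A minimal sketch of the core building block (array shapes, names, and the toy data are my own choices, not the linked notebook): a 1D spectral convolution that keeps only the lowest Fourier modes and multiplies them by learned complex weights.

```python
import jax
import jax.numpy as jnp

def spectral_conv_1d(weights, u):
    # u: (channels_in, n_points), weights: (modes, channels_out, channels_in), complex
    n = u.shape[-1]
    u_hat = jnp.fft.rfft(u, axis=-1)                     # (channels_in, n//2 + 1)
    modes = weights.shape[0]
    out_hat = jnp.einsum("moi,im->om", weights, u_hat[:, :modes])
    # Zero-pad the discarded high modes before transforming back
    out_hat = jnp.pad(out_hat, ((0, 0), (0, u_hat.shape[-1] - modes)))
    return jnp.fft.irfft(out_hat, n=n, axis=-1)          # back to physical space

key_w, key_u = jax.random.split(jax.random.PRNGKey(0))
modes, c_in, c_out, n = 16, 2, 4, 128
w = jax.random.normal(key_w, (modes, c_out, c_in)) + 0j  # "learned" complex weights
u = jax.random.normal(key_u, (c_in, n))
print(spectral_conv_1d(w, u).shape)                      # (4, 128)
```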
Custom Rollout transformation in JAX (using scan)
516 views · 5 months ago
Calling a timestepper repeatedly on its own output produces a temporal trajectory. In this video, we build syntactic sugar around jax.lax.scan to get a function transformation doing exactly this very efficiently. Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/jax_tutorials/rollout_transformation.ipynb 👉 This educational series is supported by the world-lea...
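A minimal sketch of such a transformation (function names and the toy stepper are my own, not the notebook's): wrap any stepper so that one call produces a whole trajectory via jax.lax.scan.

```python
import jax
import jax.numpy as jnp

def rollout(stepper, n_steps, include_init=True):
    def rolled_out(u_0):
        def scan_fn(u, _):
            u_next = stepper(u)
            return u_next, u_next            # carry and per-step output
        _, trajectory = jax.lax.scan(scan_fn, u_0, None, length=n_steps)
        if include_init:
            trajectory = jnp.concatenate([u_0[None], trajectory], axis=0)
        return trajectory
    return rolled_out

stepper = lambda u: 0.9 * u                  # toy stepper: exponential decay
trajectory = rollout(stepper, n_steps=5)(jnp.ones(3))
print(trajectory.shape)                      # (6, 3) -> initial state plus 5 steps
```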
JAX.lax.scan tutorial (for autoregressive rollout)
1.5K views · 6 months ago
Do you still loop, append to a list, and stack as an array? That's the application for jax.lax.scan. With this command, producing trajectories becomes a blast. Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/jax_tutorials/jax_lax_scan_tutorial.ipynb 👉 This educational series is supported by the world-leaders in integrating machine learning and artificial in...
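A small before/after sketch (toy stepper of my own choosing): the loop-and-stack pattern versus the equivalent jax.lax.scan call.

```python
import jax
import jax.numpy as jnp

def step(u):
    return u + 0.1 * jnp.sin(u)

u_0 = jnp.linspace(0.0, 1.0, 4)

# Loop version: append to a list, stack at the end (slow to trace, awkward to jit)
states = []
u = u_0
for _ in range(100):
    u = step(u)
    states.append(u)
trajectory_loop = jnp.stack(states)

# Scan version: one traced step, compiled efficiently
def scan_fn(u, _):
    u_next = step(u)
    return u_next, u_next

_, trajectory_scan = jax.lax.scan(scan_fn, u_0, None, length=100)
print(jnp.allclose(trajectory_loop, trajectory_scan))   # True
```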
Upgrade the KS solver in JAX to 2nd order
705 views · 6 months ago
The Exponential Time Differencing algorithm of order one applied to the Kuramoto-Sivashinsky equation quickly becomes unstable. Let's fix that by upgrading it to a Runge-Kutta-style second-order method. Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/fft_and_spectral_methods/ks_solver_etd_and_etdrk2_in_jax.ipynb 👉 This educational series is supported by the...
Simple KS solver in JAX
1.8K views · 6 months ago
The Kuramoto-Sivashinsky equation is a fourth-order partial differential equation that shows highly chaotic dynamics. It has become an exciting testbed for deep learning methods in physics. Here, we will code a simple Exponential Time Differencing (ETD) solver in JAX/Python. Code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/fft_and_spectral_methods/ks_solver_etd_in_jax.ipynb 👉 Th...
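A condensed sketch of the idea (resolution, time step, and initial condition are my own choices, and this simple first-order version needs small time steps, as the follow-up ETDRK2 video discusses): treat the stiff linear operator exactly in Fourier space and the nonlinear term explicitly.

```python
# ETD1 step for the Kuramoto-Sivashinsky equation u_t = -u u_x - u_xx - u_xxxx
# on a periodic domain, solved in Fourier space.
import numpy as np

n, domain_length, dt = 256, 64.0, 0.01
x = np.linspace(0, domain_length, n, endpoint=False)
k = 2 * np.pi * np.fft.rfftfreq(n, d=domain_length / n)

lin_op = k**2 - k**4                      # linear part: -u_xx - u_xxxx in Fourier space
exp_lin = np.exp(dt * lin_op)
# ETD1 coefficient (e^{L dt} - 1)/L; the L = 0 mode is handled separately.
# (In practice, small |L dt| also needs care against cancellation.)
safe_lin = np.where(lin_op == 0.0, 1.0, lin_op)
etd_coef = np.where(lin_op == 0.0, dt, (exp_lin - 1.0) / safe_lin)

def nonlinear(u_hat):
    u = np.fft.irfft(u_hat, n=n)
    return -0.5j * k * np.fft.rfft(u**2)  # -u u_x = -(1/2) d/dx (u^2)

u = np.cos(2 * np.pi * x / domain_length) * (1 + np.sin(2 * np.pi * x / domain_length))
u_hat = np.fft.rfft(u)
for _ in range(200):                      # short rollout; first order needs small dt
    u_hat = exp_lin * u_hat + etd_coef * nonlinear(u_hat)
print(np.fft.irfft(u_hat, n=n)[:5])       # a few values of the evolved field
```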
np.fft.rfft for spectral derivatives in Python
925 views · 6 months ago
For real-valued inputs, the rfft saves about half of the computation over the classical fast Fourier transform. Let's use it to speed up the spectral derivative calculation. Here is the code: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/fft_and_spectral_methods/spectral_derivative_numpy_with_rfft.ipynb 👉 This educational series is supported by the world-leaders in integra...
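A minimal 1D sketch (my own example function): the rfft stores only the non-negative wavenumbers, so the wavenumber array comes from rfftfreq.

```python
import numpy as np

n = 100
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
u = np.sin(x) ** 2

k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)   # only n//2 + 1 non-negative wavenumbers
u_hat = np.fft.rfft(u)
du_dx = np.fft.irfft(1j * k * u_hat, n=n)

print(np.max(np.abs(du_dx - 2 * np.sin(x) * np.cos(x))))   # analytic: sin(2x)
```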
Spectral Derivative with FFT in NumPy
2.1K views · 7 months ago
Physics-Informed Neural Networks in Julia
3K views · 8 months ago
Physics-Informed Neural Networks in JAX (with Equinox & Optax)
5K views · 9 months ago
Neural Networks using Lux.jl and Zygote.jl Autodiff in Julia
1.9K views · 10 months ago
Neural Networks in Equinox (JAX DL framework) with Optax
2.2K views · 10 months ago
Neural Networks in pure JAX (with automatic differentiation)
2K views · 11 months ago
Neural Network learns sine function in NumPy/Python with backprop from scratch
1.6K views · 11 months ago
Simple reverse-mode Autodiff in Python
2K views · 1 year ago
Animating the learning process of a Neural Network
2.1K views · 1 year ago
Neural Network learns Sine Function with custom backpropagation in Julia
1.2K views · 1 year ago
Simple reverse-mode Autodiff in Julia - Computational Chain
716 views · 1 year ago
Neural ODE - Pullback/vJp/adjoint rule
1.2K views · 1 year ago
Neural ODEs - Pushforward/Jvp rule
714 views · 1 year ago
L2 Loss (Least Squares) - Pullback/vJp rule
284 views · 1 year ago
L2 Loss (Least Squares) - Pushforward/Jvp rule
272 views · 1 year ago

COMMENTS

  • @badzieta · 1 hour ago

    Python tutorials: import tutorial tutorial.run()

  • @kanishkbhatia95 · 3 hours ago

    Super cool as always. Some feedback to enhance clarity - when writing modules (SpectralConv1d, FNOBlock1d, FNO1d), overlaying the flowchart on the right hand side to show the block to which the code corresponds would be really helpful. I felt a bit lost in these parts.

  • @mirkoadzic8941 · 19 hours ago

    Thank you so much for this series! It has helped me tremendously to understand how automatic differentiation works under the hood. I was wondering if you plan to continue the series, as there are still operations you haven't covered. In particular, I am interested in how the pullback rules can be derived for the "not so mathematical" operations such as permutations, padding, and shrinking of tensors.

  • @fexvfx · 2 days ago

    I think it's awesome how you just casually code this up from scratch while explaining it so well 😅

  • @piero8284 · 3 days ago

    Just one observation: the unconstrained functional optimization gives you an unnormalized function, and you said it is necessary to normalize it afterwards. So what guarantees that this density, once normalized, will be the optimal solution? If you normalize it, it will not be a solution to the functional derivative equation anymore. Also, the Euler-Lagrange equation gives you a critical point; how does one know whether it is a local minimum/maximum or a saddle point?

  • @rezaafshar3832 · 5 days ago

    Excellent work! Thanks a lot for sharing.

  • @alperozt · 6 days ago

    Thank you. This is super helpful as an intro to fenics for fluid.

  • @kseniastern381 · 7 days ago

    Thank you for the video. Very informative and helps a lot. I have a small question: why is there no density in the equations? It appears in the Navier-Stokes equations. And one bigger question: is it possible to include heat transfer in this code with the same scheme as the pressure update, or does it require a different approach?

  • @solounomas0 · 8 days ago

    What if the velocity is not constant and changes with the density (q)?

  • @jananpatel9030 · 8 days ago

    Exam in 20 minutes, thanks haha

  • @valeriafonsecadiaz1527 · 8 days ago

    For full accuracy, at 0:15, the distribution of X is actually the distribution of X given Z=z, right?

  • @valeriafonsecadiaz1527 · 8 days ago

    I love you

  • @JohnPoduska · 11 days ago

    Thank you so much for all your great videos. What IDE do you use?

  • @user-cx7ow7tu9z · 11 days ago

    Hello, I'm an undergraduate student from South Korea. I really appreciate your videos; they help me a lot in understanding JAX and programming. Can I ask whether this 1D turbulent flow has a name?

    • @MachineLearningSimulation · 8 days ago

      You're very welcome 🤗 Thanks for the kind feedback. This dynamic is associated with the Kuramoto-Sivashinsky equation. I'm not certain we can classify it as turbulent (that depends on the definition), but it definitely is chaotic (in the sense of high sensitivity with respect to the initial condition).

  • @ntowebenezer986 · 13 days ago

    Thank you for this wonderful video. Please, could you also do one on vertical flow with two-phase liquid and gas?

    • @MachineLearningSimulation · 7 days ago

      You're very welcome 🤗 Thanks for the suggestion. That's more of a niche topic, though. For now, I want to keep the videos rather general to address a larger audience.

  • @diegoandrade3912 · 14 days ago

    This dude is always on point ... keep it coming!

  • @pravingaikwad1337 · 15 days ago

    How do we know the joint distribution?

    • @MachineLearningSimulation · 8 days ago

      That refers to us having access to a routine that evaluates the DAG. Check out my follow-up video. This should answer your question: ua-cam.com/video/gV1NWMiiAEI/v-deo.html

  • @ziweiyang2673 · 15 days ago

    Very nice video, truly showing the potential of Julia for SciML! I'm curious: have you compared this Julia algorithm with JAX? It seems much faster than training in JAX. However, I'm also wondering what happens if I need to construct an MLP rather than a one-layer net, which is the most common situation in ML. How about high-dimensional data rather than 1D data? Does that also increase the complexity of using Julia?

  • @pravingaikwad1337 · 15 days ago

    When you say Z is exponentially distributed and X is normally distributed, how do you know this? Is this an assumption?

    • @MachineLearningSimulation · 8 days ago

      Yes, that's all a modeling assumption. Here, they are chosen because they allow for a closed-form solution.

  • @fahimbinselim7768 · 15 days ago

    Thank you very much!

  • @jesusmtz29 · 16 days ago

    Great stuff

  • @jrlearnstomath · 16 days ago

    Hi this was the most epic explanation I've ever seen, thank you! My question is that at ~14:25, you swap the numerator and denominator in the first term -- why did you do this swap?

  • @lightfreak999 · 16 days ago

    Very cool video! The walkthrough write-up of this alternate program of 1D FNO is super useful for newcomers like myself :)

  • @zgushonka12 · 17 days ago

    Hi man, awesome videos! A problem, if I may: let's say I have a box or cylinder with fluid (1/2, 3/4). What I need is to simulate acceleration on it: input (accelerationXYZ, dt), return the inertial feedback forces on XYZ over time. Simple level of detail, for 300-400 fps. Maybe bake the center-of-mass behaviour into ML or something. Any ideas? Thanks!

  • @andywang1950 · 19 days ago

    Not sure if I'm mistaken or not, but in your explanation the formula is wrong, since it's actually supposed to use "ln" rather than "log". For others looking at the math part of the explanation: np.log actually computes ln, i.e. log base e. It got me confused for a while.

    • @MachineLearningSimulation · 7 days ago

      Good catch. 👍 Indeed, it has to be "ln" for the correct Box-Muller transform. CS people tend to always just write log 😅

  • @z4br4k98 · 20 days ago

    Thanks! This helped with my probabilistic ML assignment!

  • @ricardogomes9528 · 20 days ago

    Great video, explaining even the math concepts, but I'm left with a doubt, perhaps a stupid one: in the beginning of the video you had the blue line p(Z|D), the probability of the latent variable Z given the data D, so the events Z and D are not independent, right? If I understood correctly, then, at 10:20, you say that we have the joint probability P(Z *intersect* D). I don't think I understood this: how do we know we have that intersection? Is it explained at any earlier point...? Thank you for your attention.

  • @rene9901 · 21 days ago

    Hi .. was looking for JAX tutorials .. found this video and your channel and hence must say THANKS for the nice explanations.

  • @adityamwagh · 22 days ago

    Are you German? You sound very German! Great videos BTW!

  • @ananthakrishnank3208 · 23 days ago

    10:27 I don't think it is right. Summation is for the whole (q * p/q), and we cannot conveniently apply summation to just q alone.

  • @YOak_ML · 24 days ago

    Great video! I think at 12:10 it should be inner_objective(gd_iterates,THETA) instead of solution_suboptimality. In this case it is still working since the analytical solution is 0.

    • @MachineLearningSimulation · 23 days ago

      Thanks for the kind comment 😊. Good catch at 12:10. 👍 I updated the file on GitHub: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/adjoints_sensitivities_automatic_differentiation/curse_of_unrolling.ipynb

  • @aryanpandey7835 · 24 days ago

    please make a video series on Graph neural network

    • @MachineLearningSimulation · 23 days ago

      Hi, thanks for the suggestion. 👍 Unfortunately, unstructured data is not my field of expertise. I want to delve into it at some point, but for now I want to stick with structured data.

  • @Machine_Learner · 27 days ago

    Awesome stuff. I am wondering if you can do a similar video for the new neural spectral methods paper?

    • @MachineLearningSimulation · 26 days ago

      Thanks 🤗 That's a cool paper. I just skimmed over it. It's probably a good idea to start covering more recent papers. I'll put it on my todo list, still have some other content in the loop I want to do first, but will come back to it later. Thanks for the suggestion 👍

  • @raajchatterjee3901 · 27 days ago

    It would be cool here to show analytically that the maximum likelihood estimated parameter for a Bernoulli is just the sample mean of the data :) I guess autodiff is just performing this procedure numerically?

    • @MachineLearningSimulation · 27 days ago

      Definitely, autodiff is a bit of overkill here :D It was more of a showcase of how to do this with TFP. In case you are interested in the MLE for the Bernoulli distribution: ua-cam.com/video/nTizrDsR1x8/v-deo.html

  • @Michael-vs1mw · 27 days ago

    Really cool stuff, with a clear explanation in 1D, that seems rare. Great work, thanks!

  • @MathPhysicsFunwithGus · 28 days ago

    Amazing explanation, thank you so much!

  • @fenglongsong4760 · 28 days ago

    Super interesting observation and super clear explanation! Thanks for sharing it with us!

  • @extendedanthamma5687 · 29 days ago

    This was very useful!! Thank you so much!! Can you please make a video on how to solve the Helmholtz PDE in BEMPP (a boundary element method Python package) for acoustic simulations?

    • @MachineLearningSimulation · 22 days ago

      Thanks for the kind comment ❤️ You're very welcome. I am familiar with neither the Helmholtz PDE nor the bempp package. As far as I can see, the Helmholtz PDE is essentially a Poisson problem. Maybe you will find this video of mine helpful: ua-cam.com/video/O7f8B2gPzSc/v-deo.html

  • @huidongtang · 1 month ago

    So clear!

    • @MachineLearningSimulation · 27 days ago

      Thanks for the comment 😊 I used auto translate (which didn't turn out too well). I hope you liked the video 👍

  • @derrick5594 · 1 month ago

    Glad I found your channel. Great explanation 👍

  • @user-hb5le6qt8t · 1 month ago

    writing so ugly...

  • @_thisnameistaken · 1 month ago

    Do you have the code for this saved? If so, could I have it for reference on my own fluid sim?

  • @driesvanoosten4417 · 1 month ago

    Can we also use FEniCS to solve the nonlinear heat equation, i.e. the case where the diffusivity depends on temperature?

    • @MachineLearningSimulation · 27 days ago

      I'm pretty sure this is possible. If you check out the FEniCS documentation, you will find a section on solving nonlinear PDEs with Newton's method.

  • @user-jw8fh4hy1o · 1 month ago

    Challenging! Can this video solve the problem of finding dA/dθ in adjoint optimization for min J(x(θ),θ); s.t. Ax=b;😀

    • @MachineLearningSimulation · 27 days ago

      Definitely; If you implement the rule as part of your autodiff system you can use gradient-based optimizers to solve the optimization problem 😊

  • @crapadopalese · 1 month ago

    6:00 This next step doesn't look correct to me. E.g., if I choose a specific g(u,\theta) = \theta^u, I don't think I would get a sum with these two terms. Am I missing something? The correct result of differentiating the integrand would have been just \theta^u; your equation proposes something else.

  • @user-jw8fh4hy1o · 1 month ago

    Thanks so much for the video explanation! 😍 My understanding is that both Lagrangian and implicit derivative methods yield the same sensitivity analysis results, right? The Lagrangian approach is more convenient for large linear equation constrained implementations, as it requires only one forward and one backward propagation computation per iteration. In CFD, sensitivity analysis and adjoint optimization are typically implemented using adjoint equations, right?

  • @vishalkumar040393 · 1 month ago

    Do you use fenicsx?

  • @Rau379 · 1 month ago

    Hello, thank you for the tutorial; it is very helpful. Could you clear up a doubt? For the pressure boundary condition, could I use homogeneous Neumann conditions everywhere and set pressure = 0 at one node (any node inside the domain)? Then there wouldn't be infinitely many solutions, but one solution with zero at the specified node.

    • @MachineLearningSimulation · 22 days ago

      You're very welcome. 😊 Yes, that's one way to fix the additional degree of freedom when solving pressure Poisson problems.

  • @nicholasvizard162 · 1 month ago

    Thank you very much for these videos; they are incredible and offer the best explanations of these topics that I've found online. I particularly appreciate the format of your introductions, where you succinctly review prior knowledge and clearly define the key question that the video will explore. My understanding of the Dirichlet distribution is that it serves as a prior for probabilities, with each individual probability constrained to the [0, 1] range. I was wondering if you're aware of any methods to impose specific bounds on individual probabilities within the context of the Dirichlet distribution. For example, suppose you had prior knowledge that the probability of it being sunny could never exceed 0.8; you might want to set its range between [0, 0.8]. Of course, the sum of probabilities would still need to total 1. Is there a known approach or modification to the Dirichlet distribution that accommodates this type of constraint? Many thanks for your help!

  • @5ty717 · 1 month ago

    Excellent