> Oh I'm one of those science nerds....Wrong crowd I see… I meant it as a joke. But thanks for the info and explanation @ricepr!
All good my man!
> I code/experiment a lot with tube amp modeling for embedded and VST, but I have no experience with neural stuff or ML. In traditional DSP, you basically do virtual modelling of the analog circuit. You program for various tube characteristics, DC offset, phase, filters, etc etc... Pretty long and tedious process (at least for me), but a lot of fun.
In layman's terms, a neural network is trained on a set of input data and expected output results. The more (different) data you feed in, the better the training is going to be.
Hi, everyone! I'm sdatkinson on GitHub/RunawayThumbtack on YouTube, the creator of NAM.
I thought I'd create an account & drop in on this thread to add to some of the discussion & answer a few of the questions/comments.
> Hi mate welcome to TGP. Any plans for releasing it as a VST so that we non-coder guitarists can enjoy it too?
> It's open source, so it's gonna be up to somebody who knows how to make such a program to utilize it like that.
@chameleon101663 thanks for sharing & thanks for the kind words!
> will be open-sourced soon
It's already out & being developed more or less "in public"...have a look!
@phil_m
- https://github.com/sdatkinson/neural-amp-modeler - the training code
- https://github.com/sdatkinson/iPlug2 - puts it into a plugin. You'll specifically want to open the solution under `Examples/NAM` (this is my first plugin, so I just took one of the examples and made the changes I needed to get the net in there & minimal anything else)
> It's an open source plug-in named Neural? Good luck with that…
So this might be a good excuse to explain my background & choice of name. My most recent line of work has been in ML research. My interests are mainly in developing new approaches to fuse ML/AI with natural science applications. The "neural" prefix comes from a now-famous work called Neural Ordinary Differential Equations, from which there are literally thousands of follow-on works, but it was relatively new when I first did version 0.1:
Since then, prefixing things with "neural" has become somewhat of a trope in my line of work when you're representing some unknown (e.g. an amp) with a neural net and fitting it to data. If anything, the name was meant as a play on "NAMM".
@PLysander
> This thing trains a neural network given an amp input... which is pretty much what the QC reportedly does for captures.
NDSP has published a paper that is likely very similar to the modeling they used; I have a few guesses what the QC does for captures, and it's not quite what I'm doing here.
@tjontheroad
> Is this code "borrowed" I wonder?
Nope--all mine.
@Orvillain
> This stuff lives and dies on its dataset, fundamentally.
This is very true. My "go-to" dataset I created has a bunch of different guitars, pickups, playing articulations & dynamics, chords, single notes, etc., basically going chromatically up and down the fretboard. Fortunately, data is super-cheap for this--record for a couple minutes and you get millions of data points!
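To put numbers on "millions of data points" (just arithmetic, assuming a typical 48 kHz capture rate):

```python
sample_rate = 48_000  # Hz; every audio sample is one (input, output) training pair
minutes = 3           # a few minutes of varied playing
print(f"{sample_rate * 60 * minutes:,} training pairs")  # 8,640,000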
> I'm gonna check this out on my new Macbook Pro with the M1 Pro chip
Let me know how it goes...I haven't gotten the chance to try one yet myself.
@Quad4
> Can this handle changes to various knobs on the "amp" or "pedal"?
Soon! The training code will support it in the next version...it's already merged into the development branch. I need a weekend here & I'll get the plugin side implemented too.
> Does not need to handle the clean to crunch boundary, but ....
Oh, it does that fine. The data has me playing over the clean-to-crunch dynamic transition, so the model learns it. What's really crazy is if you juice the input and hit it with a huge signal. I put a "x100" on the input of the Deluxe Reverb model in the iPlug repo and the resulting overdrive sounded amazing (and frankly amazed me that it extrapolated to that kind of signal so well!) Can't attest that it's exactly like the real thing though because I don't have any 20dB clean boosts hanging around!
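If you want to try the same experiment, the "x100" amounts to nothing more than scaling the samples before the model sees them (a hypothetical helper for illustration, not the plugin's actual code):

```python
import torch

def play_boosted(model: torch.nn.Module, di: torch.Tensor, gain: float = 100.0) -> torch.Tensor:
    """Feed the trained model a juiced signal, like putting a huge clean boost in front of the real amp."""
    return model(gain * di)
```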
@mbenigni
> Treating each capture as a collection of captures multiplexed with additional controls would be trivial - just an organizational layer, really. Interpolation between settings would really bring it home
There are a lot of ways to do this. One common way is, instead of having a single number be the input (your mono DI), to make it a list of numbers where the other numbers are the knob settings. But there are other ways as well. What you don't want to do, though, is take the (weighted) average of several models, because then you've got N times more CPU to run it.
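A minimal PyTorch sketch of that first idea (names and layer sizes are made up for illustration; this isn't NAM's code): broadcast the knob values across time and stack them with the audio as extra input channels, so a single network covers the whole control range.

```python
import torch
import torch.nn as nn

class KnobConditionedNet(nn.Module):
    """Toy example: one network whose input is the audio plus the knob settings,
    instead of N separate captures averaged together."""

    def __init__(self, n_knobs: int, hidden: int = 32, kernel: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # 1 audio channel + one channel per knob, broadcast across time
            nn.Conv1d(1 + n_knobs, hidden, kernel_size=kernel),
            nn.Tanh(),
            nn.Conv1d(hidden, 1, kernel_size=1),
        )

    def forward(self, audio: torch.Tensor, knobs: torch.Tensor) -> torch.Tensor:
        # audio: (batch, 1, time); knobs: (batch, n_knobs), each scaled to [0, 1]
        k = knobs[:, :, None].expand(-1, -1, audio.shape[-1])
        return self.net(torch.cat([audio, k], dim=1))

model = KnobConditionedNet(n_knobs=2)                 # e.g. gain + master
y = model(torch.randn(8, 1, 4096), torch.rand(8, 2))  # one forward pass covers any setting
```

One forward pass handles any knob setting, so the cost stays that of a single model, and interpolation between settings comes for free from the network being a smooth function of the knob inputs.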
@LaXu - thanks for the very detailed thoughts!
> I don't see any "secret sauce" here.
That's the thing--there...isn't any, really. I started this (and am still doing it) as a for-fun project, and decided to basically start with the simplest approach I could think of--things anyone could have tried in the early 2010's--and it...just worked! With another minor tweak (using a ConvNet), it basically gave me the results in the videos. Combined with how nice modern ML stuff like PyTorch is, it's super-accessible to do this now.
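For the curious, that family of "simplest approaches" looks roughly like this (a toy sketch of a dilated causal ConvNet, not the actual architecture in the repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAmpNet(nn.Module):
    """Toy dilated causal ConvNet: maps the mono DI signal to the amp'd signal, sample for sample."""

    def __init__(self, channels: int = 16, kernel: int = 3, dilations=(1, 2, 4, 8, 16, 32)):
        super().__init__()
        self.kernel = kernel
        convs, in_ch = [], 1
        for d in dilations:
            convs.append(nn.Conv1d(in_ch, channels, kernel, dilation=d))
            in_ch = channels
        self.convs = nn.ModuleList(convs)
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time). Pad on the left only, so each output sample
        # depends only on past input -- causal, as real-time use requires.
        for conv in self.convs:
            pad = (self.kernel - 1) * conv.dilation[0]
            x = torch.tanh(conv(F.pad(x, (pad, 0))))
        return self.head(x)
```

With those dilations the receptive field is only 127 samples (under 3 ms at 48 kHz), which is part of why inference can stay cheap enough for real time.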
> I tried going through the plugin repository but had a hard time finding the actual plugin's code rather than just the framework around it that it uses to make a functional plugin.
Here ya go (open the .sln file in Visual Studio; don't have a Mac to experiment w/).
> it can take several hours to generate the models, using a GPU which is very suited for these sort of tasks (and might even have specialized hardware for it like Nvidia RTX GPUs do).
I've now got an RTX 3070 Ti, but did these w/ my 2070. But you'd be fine with a bottom-barrel GPU on the cloud--this is a tiny model (by necessity for real-time!) so you really don't need anything modern for this.
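Rough arithmetic on why the model has to be tiny (illustrative numbers, not measurements of the actual plugin):

```python
sample_rate = 48_000  # one forward pass per output sample
params = 10_000       # order of magnitude for a small conv net
# Roughly one multiply-accumulate per weight per output sample:
print(f"~{params * sample_rate / 1e9:.1f} GMAC/s")  # ~0.5 GMAC/s: light work for any modern GPU (or CPU)
```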
> I expect this sort of stuff is behind the boom in new modeler vendors (plugin or hw) in the past 5 years or so.
Almost certainly--there's a paper that NDSP published a few years back (full disclosure: I read it--but only after I had my "solution", just to see for fun how ours compared. Theirs is a bit more complex...but when I coded theirs up, it did worse...) and others have done a straight-code of the paper (e.g. https://github.com/GuitarML/SmartGuitarAmp).
> You can shortcut a lot of stuff using neural networks when you don't have to painstakingly measure and model every single component to replicate in the virtual world.
This is basically true; I'd quibble that it's not a "short cut"--it's just not necessary to go through that trouble with this application because the data are dirt-cheap. (I use more elaborate approaches in my day job, but you basically always start with the "straightforward" answer in case it works.) In the past, things like SPICE were the main toolkit, and they're still useful, but this is just another "tool" to model.
> The complication with this stuff comes when you want to produce a device or plugin that is also operable like an amp where you want its tonestack (EQ) to react like the real thing, have its gain controls work the same etc. I would not be surprised if some plugins basically interpolate between different models or have some single trained model built based on various control settings.
I'll have a video for this soon.
> but does it feel like playing through the source setup?
In short: yep. I don't personally buy the "feel" thing--either it's the same waveform or it's not, and this is...really close (see the "mono test" in the QC comparison). I got below my own threshold of perception back w/ v0.1, so I've honestly just been flying by numbers since.
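"Flying by numbers" usually means an objective waveform metric; a common one in the amp-modeling literature (not necessarily the exact number used here) is the error-to-signal ratio:

```python
import torch

def esr(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Error-to-signal ratio: energy of the residual relative to the target. 0 = identical waveforms."""
    return torch.sum((target - pred) ** 2) / torch.sum(target ** 2)
```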
> I recall Doug Castro mentioning their first iterations of "Neural Capture" took hours of training on a powerful desktop PC.
I've got an idea I'm pretty confident will work (quite quickly, though I'm going to avoid claiming how fast until I do it & I'm sure so I don't eat my words). But "fast" fits may fall under the "special sauce" category.
@Fatty McAtty
> If my country self can do Python, I'm confident you'll have absolutely zero trouble with it.
True--my brain has been completely smoothed out from Python.
@Calaban
> How does it do with clean and slightly broken up sounds?
The last vid shows it w/ my Deluxe Reverb.
@ricepr
> Likely a speedup using Julia instead of python based.
There's always one SciML-er around these days...hey there! In this application, you might not get much speedup from Julia--you can still vectorize & keep the GPU fed well enough with big minibatches w/ PyTorch (whose kernel calls are async) that it was worth the easier development (IMO) to keep to Python...but Julia's great for shoving MLPs into classical computation, for sure. (Still had to do the plugin in C++ anyways.)
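Concretely, "keeping the GPU fed" here just means training on big batches of segments cut from the long recording, so each (async) kernel launch does a lot of work. A sketch, assuming one long, aligned input/output WAV pair:

```python
import torch

def random_minibatch(x: torch.Tensor, y: torch.Tensor, seg_len: int = 8192, batch_size: int = 64):
    """Cut random, aligned segments from one long (input, output) recording.
    x, y: (1, time) tensors of the DI and the amp'd signal."""
    starts = torch.randint(0, x.shape[-1] - seg_len, (batch_size,))
    xb = torch.stack([x[:, s:s + seg_len] for s in starts])  # (batch, 1, seg_len)
    yb = torch.stack([y[:, s:s + seg_len] for s in starts])
    return xb, yb
```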
> Many fewer operations
Actually, that might not be true here--guitar amps are pretty simple circuits (though I don't do RT-SPICE stuff myself so I couldn't say for certain...might have if I picked this up a decade ago!)
@aro
> How does the modeling code work for neural? I imagine you don't have to do all those steps.
Basically, yep. I run the signal through the amp, export the WAV, point the trainer at the new file, then hit go and grab it in the morning.
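In code, that overnight loop is roughly the following (a sketch with made-up file names, reusing the toy pieces above; the real trainer lives in the neural-amp-modeler repo):

```python
import torch
import torchaudio

# The reamp pair: the DI you played, and the same DI recorded back through the amp.
x, sr = torchaudio.load("di_input.wav")    # (1, time)
y, _ = torchaudio.load("amp_output.wav")   # time-aligned with the DI

model = TinyAmpNet()                        # the toy net sketched earlier
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10_000):                  # "hit go and grab it in the morning"
    xb, yb = random_minibatch(x, y)
    loss = torch.mean((model(xb) - yb) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

torch.save(model.state_dict(), "amp_model.pt")
```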
> This is basically true; I'd quibble that it's not a "short cut"--it's just not necessary to go through that trouble with this application because the data are dirt-cheap. (I use more elaborate approaches in my day job, but you basically always start with the "straightforward" answer in case it works.) In the past, things like SPICE were the main toolkit, and they're still useful, but this is just another "tool" to model.
I'm genuinely wondering... Why does anyone still use the "traditional" DSP approach when creating plug-ins or embedded modeled amps? What are the shortcomings of the neural approach, if any?
From what I see, plug-in companies are still hiring DSP engineers. Why bother? Why not ML instead?
> I’m having trouble finding clean tones or a Deluxe in any of the videos. Maybe I’m old and don’t know how to work the interwebz like you kids. Would you please post the video again?
It’s the first video in the first post of this thread.