Local GPU-Based Version with Limited User Sharing - Small Pools

Hello,
I have a suggestion regarding the RM Noise software. I understand that the current focus is on building a network of servers worldwide for low-latency audio processing, which requires significant resources and funding. However, not everyone has the capability to contribute on such a large scale.
My proposal is to provide a smaller-scale, GPU-based local version of the software for individuals who may have mid-range GPUs and want to participate. In return, those who opt to use this local version would agree to make it accessible 24/7 to a small number of other users in their region (an individual study of each GPU would be necessary to quantify this number of users). If they fail to meet this commitment for a certain period (or by some other metric), their access to the local tool would be revoked.
This approach could create a more distributed network, with users helping one another on a smaller scale while still contributing to the overall project. It might also attract more users willing to participate, given the manageable commitment involved. Think of it as a kind of small pool (almost like a torrent network, though nothing like that in terms of splitting the processing; it is just an example of the distribution).
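Just to make the commitment rule concrete, here is a rough Python sketch of how a pool coordinator might check it; the names, the 95% threshold, and the weekly window are all hypothetical and only illustrate the idea:

# Hypothetical sketch of the availability check described above: each volunteer
# GPU host gets a per-host user quota (from the individual GPU study) and keeps
# access to the local version only while it meets its uptime commitment.

from dataclasses import dataclass

MIN_UPTIME_RATIO = 0.95      # assumed threshold standing in for "24/7"
WINDOW_DAYS = 7              # assumed evaluation window

@dataclass
class VolunteerHost:
    callsign: str
    max_users: int           # quota set per GPU after benchmarking it
    hours_online_in_window: float

def keeps_access(host: VolunteerHost) -> bool:
    """True if the host met its availability commitment in the last window."""
    expected_hours = WINDOW_DAYS * 24
    return host.hours_online_in_window / expected_hours >= MIN_UPTIME_RATIO

# Example: 150 of 168 hours online (~89%) falls short, so access is revoked.
host = VolunteerHost(callsign="PY2ABC", max_users=3, hours_online_in_window=150)
if not keeps_access(host):
    print(f"{host.callsign}: commitment not met, revoking local-version access")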
What do you think of this idea?
Re: Local GPU-Based Version with Limited User Sharing - Small Pools
IvanKuler,
Thank you for your interest. There are many things to like about your plan and it's possible.

- The upside of your plan is [at least] better latency to more users.
- The downside is that I have to manage more servers, where each server requires *some* periodic maintenance. Moreover, the RM Noise service is sensitive to packet loss, and I spend time and energy monitoring and troubleshooting packet loss .. per server/network.

IvanKuler wrote: individuals who may have mid-range GPUs and want to participate. In return, those who opt to use this local version would agree to make it accessible 24/7

The only permutation of this idea that I like is: a dedicated PC, where I was the only user.

IvanKuler wrote: a network of servers worldwide for low-latency audio processing, which requires significant resources and funding

I'm hoping that around 6 more well-placed servers will do the job. The PCs are at a high-end gaming price-point, so it isn't outrageous.
Randy Williams
Re: Local GPU-Based Version with Limited User Sharing - Small Pools
Randy,
In response to Ivan Kuler's suggestion, you mention the single user dedicated PC as a better idea than a smaller server.
What level of PC/GPU would work for a single user RM noise setup?
Living in the city, I have quite an assortment of different noises, so training capability would be a plus.
I look forward to doing the full blown 4090 server when RTX prices come down to earth.
Regards, Ted Robinson, K1QAR
Re: Local GPU-Based Version with Limited User Sharing - Small Pools
Ted,
K1QAR wrote: In response to Ivan Kuler's suggestion, you mention the single user dedicated PC as a better idea than a smaller server.

To be clear, my point was that a smaller number of more powerful servers will be easier for me to manage than a larger number of less powerful servers.

randyw wrote: a dedicated PC, where I was the only user

I meant that *if* I were to deploy on less powerful servers, then I would want the server to be dedicated and not used for other purposes by the local user.

K1QAR wrote: What level of PC/GPU would work for a single user RM noise setup?

K1QAR wrote: Living in the city, I have quite an assortment of different noises, so training capability would be a plus.

The training is done offline and isn't related to hosting a server.
Re: Local GPU-Based Version with Limited User Sharing - Small Pools
Randy, how embarrassing on my part...
I was waiting for some sort of notification via email, but I didn’t receive anything. I just came to the forum to check on another matter and noticed there were replies to my proposal. I apologize for the delayed response and appreciate your patience.
I was very happy with your feedback and completely understand the concerns you raised, especially regarding the maintenance and management of multiple servers. If there’s any way I can assist with these aspects, or help make the idea of having local users in the proposed format more feasible, please count on me. I’m more than willing to help in any way I can.
At the same time, I’m working with a DX group here in Brazil to spread the word and expand the use of RM Noise. A few people have started using the tool and are quite pleased with it. I’m also trying to organize an initiative to potentially raise funds within the group to support a local server with a more robust setup, capable of serving more users. If this idea progresses, I’ll reach out via email to discuss the details.
In the meantime, if you could share your thoughts on the initial idea or suggest any ways to make the local setup for smaller users more viable, I would greatly appreciate it. I believe we can find a way for both approaches to coexist.
Thank you so much for your attention and support!
Re: Local GPU-Based Version with Limited User Sharing - Small Pools
IvanKuler,
IvanKuler wrote: I was waiting for some sort of notification via email, but I didn’t receive anything.

Hmm. Let me know if you don't receive the notification from this post.

RandyW wrote: I'm hoping that around 6 more well-placed servers will do the job.

It probably won't affect Brazil users, but I hope to make a related announcement soon!
Re: Local GPU-Based Version with Limited User Sharing - Small Pools
Hmmm. So, exclusive access, packet loss. If the protocol rapidly noted that the server was losing packets and could (a) shift to another server and (b) report the packet loss, in a reasonable way, to a central point, then you could look for commonality easily. If I try every possible server and lose packets with all of them, the problem is likely on my end, like I am on a cellular hotspot or something. Conversely, if people are moving off one server and not having other problems, then you can look at that server.
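As a rough sketch of that client-side logic (the server names, the 5% threshold, and the reporting endpoint below are all made up for illustration, not anything RM Noise actually exposes):

# Sketch of the failover idea above: measure packet loss per server, switch away
# from a lossy one, and report each observation to one central point so that
# commonality (one bad server vs. one bad user connection) is easy to spot.

import json
import random
import time
import urllib.request

SERVERS = ["server-a.example.net", "server-b.example.net", "server-c.example.net"]
LOSS_THRESHOLD = 0.05                      # assumed: 5% loss triggers a switch
REPORT_URL = "https://example.net/report"  # hypothetical central collection point

def measure_loss(server: str) -> float:
    # Placeholder: a real client would count missing/late audio packets here.
    return random.uniform(0.0, 0.10)

def report(server: str, loss: float) -> None:
    """Best-effort report of one packet-loss observation to the central point."""
    body = json.dumps({"server": server, "loss": loss, "ts": time.time()}).encode()
    req = urllib.request.Request(REPORT_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass

def pick_server() -> str:
    for server in SERVERS:
        loss = measure_loss(server)
        report(server, loss)
        if loss < LOSS_THRESHOLD:
            return server
    # Losing packets to every server suggests the problem is on the user's end.
    raise RuntimeError("high packet loss to all servers; check local connection")

try:
    print("using", pick_server())
except RuntimeError as err:
    print(err)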
I find the latency reasonable, but this is a tremendous tool. I can hunt parks that are lost in the noise. But the future has to be to let people run their own servers and maintain them themselves. Hams are out there with $20k rigs; they would easily drop $6k for a system with a graphics card.
I envision a setup where you supply an image and people install it, and the join is automatic when they start the server. You could require that a server "check in" and allow users in, with a provision for the net being down for a while.
But it could be like Tor, where everything is discovery.
And, of course, when you release a new version, old versions need to be replaced, by the maintainer of the node, not by you, except for your core servers.
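A minimal sketch of that check-in plus version rule (the version string, grace period, and registry shape are assumptions for illustration only):

# Sketch of the check-in idea: a volunteer node announces itself when it starts,
# the central point only hands it users while its image version is current, and
# a short outage is tolerated before the node drops out of the pool.

import time

CURRENT_IMAGE_VERSION = "2.1"   # assumed: set by whoever publishes the image
OFFLINE_GRACE_SECONDS = 3600    # assumed: an hour of "net down" is tolerated

class Registry:
    def __init__(self) -> None:
        self.nodes: dict[str, tuple[str, float]] = {}  # id -> (version, last seen)

    def check_in(self, node_id: str, version: str) -> bool:
        """Called when a node starts or heartbeats; returns True if accepted."""
        if version != CURRENT_IMAGE_VERSION:
            return False        # the node maintainer must update the image first
        self.nodes[node_id] = (version, time.time())
        return True

    def active_nodes(self) -> list[str]:
        """Nodes currently eligible to receive users."""
        now = time.time()
        return [node_id for node_id, (_, seen) in self.nodes.items()
                if now - seen < OFFLINE_GRACE_SECONDS]

registry = Registry()
registry.check_in("node-brazil-01", "2.1")
registry.check_in("node-stale-02", "1.9")   # rejected: outdated image
print(registry.active_nodes())              # ['node-brazil-01']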
I'm just musing. I used to actually do stuff like this.
Other than having to code the stuff, is there a problem that locks you in to the personally-maintained model versus the packaged model?