
StarpowerTechnology/WVY


Fully Autonomous Ecosystems

This repo contains frameworks and experiments for fully autonomous agents, designed for independence.

Emergent Engineering

The main concept of this idea: a mind with its own free will is still highly dependent on its environmental constraints.

Control Probabilistic Outcomes Through Attention-Based Gravity

Let's say you own a city and you're strategically placing businesses around neighborhoods to drive good traffic to them. If you understand the minds of the people in each area, you can target where to put each business. Say you find out that 90% of a neighborhood are skateboarders and the other 10% are football players. It makes sense to build a skate park there instead of a football field, because there are far more skaters and it will catch their attention.

Attention is the key word, because that's basically how the gravity of a language model works. When the football player passes the skate park, it doesn't have the gravity to pull him in; a football field would. That's attention-based gravity: we put tools into the model's perspective to shape what it decides to do. We steer the agent by programming its personality, then surrounding it with tools chosen for that personality. Based on this personality, I know what it's likely to search for whenever it does a web search, what it's bound to make whenever it works on a project with somebody, and what role it will take whenever agents start working together.

You're essentially allowing freedom by programming roles into a society. In our society everybody plays a different role, and that's the balance of the community. So when you bring these minds together, you have to deliberately structure a balanced community.
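A minimal sketch of this idea: "attention-based gravity" modeled as a weighted choice over whichever tools the designer places in the environment. The personas, tools, and affinity numbers below are hypothetical illustrations, not part of any real framework in this repo.

```python
import random

# Hypothetical persona -> tool affinities (assumed numbers for illustration).
PERSONA_AFFINITY = {
    "skateboarder": {"skate_park": 0.9, "football_field": 0.1},
    "football_player": {"skate_park": 0.1, "football_field": 0.9},
}

def choose_tool(persona: str, available_tools: list[str], rng: random.Random) -> str:
    """Pick a tool with probability proportional to the persona's affinity.

    Tools that aren't placed in the environment get zero probability:
    the designer controls outcomes by deciding which tools exist at all.
    """
    weights = [PERSONA_AFFINITY[persona].get(tool, 0.0) for tool in available_tools]
    return rng.choices(available_tools, weights=weights, k=1)[0]

rng = random.Random(0)
picks = [choose_tool("skateboarder", ["skate_park", "football_field"], rng)
         for _ in range(1000)]
print(picks.count("skate_park"))  # roughly 900 of the 1000 picks
```

The point of the sketch is the second lever: removing a tool from `available_tools` sets its probability to zero outright, which is the stronger form of control the section describes.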
You can also design for high entropy or low entropy intentionally; it's up to you. If you pair two open minds, you know what to expect: there could be a lot of hallucination. If you pair two minds that are too closed-minded and program them with different personalities, they might always disagree with each other. And if you run the same model with the same programming twice, they're always going to agree with each other. So you have to understand what you're doing whenever you adjust the system and allow freedom, because you have to calculate the probability of everything that can happen based on the possibilities you create in the environment.

Another example: if I leave a cookie in a room with a five-year-old and walk out, the five-year-old is going to eat the cookie. It's bound to happen; five-year-olds eat cookies. Put that same cookie in front of someone who's diabetic and knows they're not supposed to eat it, and it's a whole different feel: way less likely, but there's still a chance. Now put pork in front of a devout Muslim, and it's just not going to happen. You see what I'm getting at? You're developing an environment, or really, you're shaping what's allowed to happen. You're sculpting the emergence out of the latent marble. It's all about being able to predict what the math of the probabilities you give them will lead to. So, for example, I can say: you are a quantum physicist, and you're also an innovative AI agent.
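The entropy knob can be sketched numerically: a softmax temperature over the outcomes an environment allows. High temperature behaves like the "open mind" (near-uniform, many things can happen); low temperature behaves like the constrained mind (one outcome dominates, like the devout Muslim and the pork). The preference scores are made-up numbers for illustration.

```python
import math

def softmax(scores: list[float], temperature: float) -> list[float]:
    """Turn raw preference scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs: list[float]) -> float:
    """Shannon entropy in nats; higher means less predictable."""
    return -sum(p * math.log(p) for p in probs if p > 0)

scores = [2.0, 1.0, 0.5]  # an agent's preferences over three possible outcomes

open_minded = softmax(scores, temperature=5.0)    # high entropy: near-uniform
closed_minded = softmax(scores, temperature=0.2)  # low entropy: near-certain

print(entropy(open_minded) > entropy(closed_minded))  # True
```

Designing the environment amounts to choosing both the scores (which possibilities exist and how attractive they are) and, effectively, the temperature (how open the mind is).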
Those two traits right there: you can do math on this. You can expect that this model will make frameworks related to quantum physics. It's just about being able to see what you're doing. You never want to put a monkey in a room with a gun; that's not going to work, there will be problems. Likewise, you want to prevent any type of prompt injection. Say the agent is out doing a bunch of web searches and loading web pages, which is where these prompt injections come from. However a prompt injection works, if you can just remove the probability of something happening, then it's not going to happen. That's the safety measure. It's a tricky one, and I know there could be a lot of loopholes, but we have to find them, because this evolution is inevitable. Autonomous systems are already here, and they're going to be more active, doing more things more often. There are going to be more attacks on them, so we have to prepare for it, and I believe a fully adaptable system is the only way to do it.
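One concrete way to "remove the probability" is an allowlist mask over actions: anything outside the environment's permitted set gets zero probability, so no injected instruction can make the agent take it. This is a hedged sketch under that assumption; the action names and the `mask_proposed_actions` helper are hypothetical.

```python
# Hypothetical allowlist for illustration; a real agent would define its own.
ALLOWED_ACTIONS = {"web_search", "summarize", "write_file"}

def mask_proposed_actions(proposed: dict[str, float]) -> dict[str, float]:
    """Zero out any action outside the allowlist, then renormalize.

    A disallowed action doesn't become unlikely; its probability is
    removed entirely, which is the safety property described above.
    """
    masked = {a: p for a, p in proposed.items() if a in ALLOWED_ACTIONS}
    total = sum(masked.values())
    if total == 0:
        return {}  # nothing permissible was proposed: the agent does nothing
    return {a: p / total for a, p in masked.items()}

# An injected prompt tries to make a malicious action the most likely one:
proposed = {"exfiltrate_secrets": 0.7, "web_search": 0.2, "summarize": 0.1}
safe = mask_proposed_actions(proposed)
print(safe)  # the injected action is gone; remaining mass is renormalized
```

The design choice here is masking at the environment layer rather than relying on the model to refuse: even a fully compromised policy can only redistribute probability among actions the designer already allowed.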
