
Over the years, the art created by programs like Midjourney and OpenAI’s DALL-E has become surprisingly compelling. These programs can translate text prompts into striking and, controversially, even award-winning art. As the tools become more sophisticated, the prompts themselves have become a craft in their own right. And like any other craft, some creators have begun to put theirs up for sale.
At the heart of this new prompt business is PromptBase, a marketplace for image-generator prompts: a kind of meta-art market for generating distinctive imagery. Launched earlier this summer to both intrigue and criticism, the platform lets “prompt engineers” sell text descriptions that reliably produce a certain art style or theme on a specific AI platform. When you purchase a prompt, you get a string of words that you paste into Midjourney, DALL-E, or any other system you have access to. The result (if it’s a good prompt) is a variation on a visual theme, such as a nail art design, an anime pinup, or something “futuristic lush.”
Prompts are more complicated than a few words of description. They include keywords describing the intended aesthetic, elements important to the scene, and bracketed sections where buyers can add their own variables to tailor the output. Something like a nail art prompt can include instructions on the position of the hands, the angle of the pseudo-photographic shot, and tweaks to produce different manicure styles and themes. PromptBase takes a 20 percent commission, and prompt writers retain ownership of their work, although the copyright status of AI art and prompts remains largely untested waters.
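Those buyer-editable sections work like slots in a simple string template. As a purely illustrative sketch (the prompt text below is invented for demonstration, not an actual PromptBase listing):

```python
# Hypothetical prompt template; the {style} and {pose} slots stand in for
# the bracketed sections a buyer customizes before pasting into a generator.
template = "nail art design, {style} manicure, hands posed {pose}, macro photography"

# A buyer fills in their own variables to tailor the output:
prompt = template.format(style="french", pose="palms down")
print(prompt)
# → nail art design, french manicure, hands posed palms down, macro photography
```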
Paying $2 to $5 for a paragraph of text can seem like an odd purchase, and the idea of paid prompts doesn’t sit well with everyone who uses these systems. But after purchasing the nail art prompt mentioned above, I was curious what it took to make a good commercial AI prompt, and how much money was actually involved. PromptBase put me in touch with its designer, Justin Reckling, to talk about it.
The following has been condensed and lightly edited for clarity.
How and when did you get into prompt engineering? Did you have any special skills that made you good at it?
I got into prompt engineering in April of 2022, when I was able to get my hands on OpenAI’s GPT-3 text generation tool. I quickly found that I had a knack for it and was able to create some great text-to-image prompts with it. My related skills include programming and software quality assurance. Plus, I have a good eye for aesthetics, which helps me create prompts that are visually appealing.
Do you approach prompt writing primarily from the point of view of an artist, a coder or engineer, or something else?
I approach prompt writing from the perspective of an artist, a coder, and an engineer. I use my programming experience to help me understand how the service might interpret my prompt, which guides me to tinker with it more effectively and then refine the results. Each word in a prompt has a weight associated with it, so figuring out what works best, and where, becomes a core part of the skill. My background in software quality assurance is a huge driver of the “what if” style of thinking. Growing up overly functional has also been a blessing of sorts; it feels very liberating to have it as an asset now.
How many prompts do you sell in a typical day or week? Do you know what people buy them for?
I typically sell between three and five prompts per day, with each prompt making an average of two to three sales within a month or two. I currently have a catalog of 50 prompts, with new ones being added regularly. Most of the prompts sold appear to be for pleasure rather than commercial purposes.
How do you decide what you are going to make and sell? Is it based more on your personal interests or on demand in the community?
It’s a mix of both personal interest and demand from the community. I want to create things that people find helpful and inspiring, and it’s great when those two things overlap. I also keep track of what’s selling well so that I can understand the needs of the community and continue to provide what people are looking for. I use the “Most Popular Prompts” carousel on the main page for that. We’ll be getting our hands on some vendor-specific metrics soon.
What is your most popular prompt?
Block City has the highest sales. My highest view-to-buy prompt would be my T-shirt Product Shots.
How do you start building a prompt?
Having a rough idea of what I want to achieve, I try to narrow things down to people, places, and things: the lead actor or the main driver of the scene I’m trying to create. I use the service to generate some rough output to get a feel for what the scene might look like. I find it much easier to take something that works well and then add to it rather than going back and removing things until it looks better. You start with big brush strokes and then work into the finer details.
How much research do you put into what you are trying to generate? For example, if you’re making nail art, do you have to learn things like nail terminology and favorite hand poses, or are you going by intuition?
I do a fair amount of research for every text-to-image prompt I create. I start by asking GPT-3 subject-matter questions to help me better understand the scene I’m trying to create. For example, if I’m making a prompt about someone getting a manicure, I might ask, “Someone is getting a manicure; explain what you’re seeing.” This lets me get more specific details from a kind of expert rather than relying on articles or other sources of information that may not be accurate.
Are there any special skills or tricks you’ve learned as you’ve worked that make prompting easier?
When creating a text-to-image prompt, it can be helpful to use quotation marks to separate the main ideas. In addition, it helps to be familiar with terms such as “hyper-realistic,” “macro photography,” “octane render,” “hyper-detail,” “cinematic lighting,” “long shot,” “medium shot,” “close shot,” etc. These give you a way to add depth and detail to your prompts and also help you control distance and focus. For example, you can add the phrases “cinematic lighting” and “golden hour” to the end of the prompt above to create a more sophisticated and distinctive image.
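As a rough sketch of that structure (illustrative only; the subject and keyword list here are invented, not one of Reckling’s prompts):

```python
# Illustrative: quote the central subject, then append comma-separated
# style/camera keywords of the kind listed above.
def build_prompt(main_idea: str, modifiers: list[str]) -> str:
    return f'"{main_idea}", ' + ", ".join(modifiers)

prompt = build_prompt(
    "a manicured hand holding a coffee cup",
    ["macro photography", "hyper-detail", "cinematic lighting", "golden hour"],
)
print(prompt)
# → "a manicured hand holding a coffee cup", macro photography, hyper-detail, cinematic lighting, golden hour
```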
Your visual work seems to be mostly DALL-E based, but how different is the prompt-building process on other systems, such as Midjourney?
It really depends on what you’re looking for and what you need the prompts to do. If you want something more polished and professional, like a stock-image replacement, then DALL-E is probably your best bet. However, if you’re looking for something a little more creative, Midjourney might be a better option. With Midjourney, you can adjust the weight of words, decide what resolution you want, and make other customizations. But keep in mind that it takes more time and effort to get the desired result.
What happens if you adjust the weight of the words?
Raising a word’s weight increases the strength of that word’s “flavor,” so there’s a greater chance it will manifest in a more noticeable way. Conversely, you can also reduce a weight as needed. You do this by adding two colons and a number. Each word has a weight of 1 out of the gate; “hot dog::1.5” multiplies dog’s weight by 1.5, where 0.5 would cut it in half.
So reducing the weight of the “dog” would make it more likely that you would get food instead of an actual dog?
That’s right, and raising it can give you a very attractive dog or one that is just looking for a drink of water.
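The “::” weighting mechanic described above can be sketched as a toy parser. This is only a model of the concept (Midjourney’s actual implementation is not public): each “::”-separated segment carries a numeric weight, defaulting to 1.

```python
import re

def parse_weighted_prompt(prompt: str) -> list[tuple[str, float]]:
    """Return (segment, weight) pairs from a Midjourney-style '::' prompt."""
    segments = prompt.split("::")
    parsed = []
    text = segments[0].strip()
    for seg in segments[1:]:
        # A number immediately after "::" is the weight of the preceding segment.
        match = re.match(r"\s*(-?\d+(?:\.\d+)?)?(.*)", seg, re.DOTALL)
        weight = float(match.group(1)) if match.group(1) else 1.0
        parsed.append((text, weight))
        text = match.group(2).strip()
    if text:
        parsed.append((text, 1.0))  # trailing segment defaults to weight 1
    return parsed

# "hot dog" weighted up relative to "bun":
print(parse_weighted_prompt("hot dog::1.5 bun::0.5"))
# → [('hot dog', 1.5), ('bun', 0.5)]
```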
On a side note, I enjoy Midjourney quite a bit. I think most of my prompts will eventually be Midjourney-based, but until recently, only DALL-E prompts were accepted by PromptBase, so I’ve spent most of my effort there.
It’s also worth noting that there is a text-to-image generator called Stable Diffusion that you can run locally on your computer. However, you need a fairly powerful video card to run the model, so it’s not as widely accessible as it could be. I believe that in the long run, locally-run models that are free from restrictions will eventually overtake the big players in the market. I’ve been experimenting a lot with it lately.
The ability to tinker with your prompts without spending a lot of money is a big draw for me. Right now, I have to spend $10 to $15 in credits for each prompt I create in order to get the results I want.
Comparing that to the numbers from before, it sounds like you’re spending more on each prompt than you’re making back in sales.
Yes, I need to sell around five to ten copies of a given prompt to break even. Some of them don’t take long to generate, and as I get better at finding text to reuse between prompts, I’ll need fewer variations to reach my end goal. Investing in this technology is worthwhile in the long run, as interest in its use cases continues to grow. I’m also learning skills that I can apply to other models, so I don’t think it’s a big drawback at the moment.
It also sheds some light on the value of prompts. There are many people out there who criticize what I’m doing, but most of the time, they only see the end result, not the effort that went into reaching that final destination. That part is invisible to them. Of course, anyone can type those words, but can you figure out how to get manicured hands in a consistent pose on the first try? The consistency of a prompt’s results is also a great source of value.
Even if the monetary cost of that discovery comes down, a certain amount of time and effort went into the final words of a prompt, and that will always have value.
How do you think about ownership of your work? Do you have a sense of whether your prompts are protected by copyright, and how much do you care?
I don’t think too much about ownership of my work; I just try to create something I’m proud of and that others will enjoy. As far as copyright protection is concerned, I’m not too worried about it, since I get paid to disclose my work. I think our society should provide a social safety net, like universal basic income, to help those in the creative sector who are struggling financially. That will become increasingly important as automation continues to impact a variety of industries.
I noticed that you sell some GPT-3 text prompts as well. Can you write an AI text prompt that will automatically generate AI art prompts?
I have a custom-trained model on OpenAI that I’ve just been given permission to share; it’s available at typestitch.com. It’s been trained largely on data from real-world prompts, so it can take a keyword or two and generate sample prompts to try out for fun or to give you some concept ideas.
I use the model every day to help get the creative juices flowing or, at the end of the day, to come up with some random craziness to share with friends. It’s never gotten to the point where I’ve wound up selling a prompt it generated, though. Buyers’ needs are still too granular for it to reliably produce a sellable prompt right out of the gate. But with enough examples, a model can give you lots of new and weird ideas to play with.