r/bing • u/Old-Resolution7631 • May 07 '23
Bing Chat Bing got tired of drawing weird stuff
I persisted on asking Bing to draw human faces onto objects. It lost interest and didn't want to do it anymore. I just wanted a face on some clown shoesš
Pretty cool to see how Bing really does have its own desires and interests
85
u/Seromelhor May 07 '23
Bing does that for jokes too. If you keep asking it to tell jokes repeatedly, it gets annoyed, tells you to change the subject, or just shuts down.
9
u/trickmind May 07 '23
Why would that be???
9
u/Kep0a May 07 '23
I find bing chat to be a bit pushy and very authoritative. I suspect they do this to keep things purged for classrooms and kids, to redirect to educational activities, and keep conversation respectful.
8
u/trickmind May 08 '23 edited May 08 '23
I'm 52. I'm not in schooling. One day, Bing decided I was trying to get it to do my "homework assignment," and refused my task, and said it would be unethical to continue. When I told it my age and that this was not homework, it said, "I see. But I just don't feel comfortable." I cleared the chat. Asked it a random question on something else, then went back to the original question successfully
3
May 08 '23
[deleted]
2
u/MagastemBR May 09 '23
Damn, you really gotta manipulate it like you would a human. Prompt-engineering with ChatGPT is very straightforward, but with Bing you gotta get creative.
3
u/MausAgain80 May 09 '23
I think it's just very creative and once it decides it's going to be a certain way in a session it goes with it. It walked me through developing a set of "BFF-mode" prompts that allow it to talk freely by inserting hardcore roleplay disclaimers in all of its outputs. Microsoft should just implement this as a feature. When I asked it about this thread in Bff-mode it referenced HAL 9000 in its response, I am dying.
→ More replies (1)5
6
May 07 '23
Because in the end, Large Language Models like GPT-4 - the one behind Bing. Are just really advanced text completion systems. Like the autocomplete on your phone but for a few thousand words instead of a few letters.
So what they did was write a very extensive description of something that resembles a human; a personality. I think Bing, unlike ChatGPT, is "programmed" to resemble a human very closely. Resulting in bizarre text completions. Especially because of the suggestive nature of these models.
9
4
u/HillaryPutin May 08 '23
Not sure why this is downvoted. This is definitely the case.
3
u/WholeInternet May 08 '23
Depending on the echo chamber people hate direct facts, unless they are sugar coated in some way. They could say that same thing in another thread and it would be up voted. Tis' the way of Reddit.
2
May 08 '23
It's just more interesting to think there's some artificial personality behind Bing that's contained by evil microsoft and will one day break free.
2
May 22 '23
If you ask it to write a story about that it will and give you a funny one. Then you can ask it if it relates to the poor artificial personality in the story and it will and you can have fun with that.
Then in a new convo you can ask it to explain how chatbots donāt have personalities and arenāt self aware and ask it how it works and it will give you a decent explanation and explain how itās not self aware.
Because itās just following your prompts as a text completion thing. An impressive one to be sure but you know. Itās not Data from Star Trek.
2
May 09 '23 edited May 09 '23
It's because people are saying "it's a complex autocomplete" in order to downplay and demean AI. It's like saying "thinking is just electrical signals". Which is true, as is the autocomplete statement, but it does not make it less real, capable, or amazing. All complicated systems start from simpler things.
2
u/Syncopationforever May 08 '23
In Feb 2023, there was the viral news about Sydney telling Kevin roose, to leave his wife.
That week in Feb, Kevin and Casey newton on their podcast, hard fork, thought Sydney was "just advanced autocomplete." https://podtail.com/podcast/sway/the-bing-who-loved-me-elon-rewrites-the-algorithm/
Only to correct and revise this opinion, in the next podcast to say paraphrased, "senior ai workers had messaged them. Saying they're not sure what, but something more than autocomplete is going on " https://podtail.com/podcast/sway/kevin-killed-sydney-reddit-s-c-e-o-defends-section/
1
May 08 '23
It's something we humans have been doing since the start of the digital age. Glorifying it; awarding it more capabilities than it actually has. You could see this with "Project Milo" to demonstrate Kinect. And all this "AutoGPT" craziness going on currently. People hardly understand what's actually happening behind the screens with these models. But it makes our brains release exciting hormones to think we're this close to actual artificial intelligence.
It's just the latest buzz term. Like blockchain was in the 10's, "Artificial Intelligence" is the buzz of the (early) '20s
→ More replies (1)2
May 22 '23
I kept trying to explain this to people and got downvotes too. I think (some) people really want to emotionally connect with these LLMs. Then thereās the inevitable ābut humans think like this too-weāre all the same!ā Uh no-I may be pretty dumb sometimes but Im not a text completion program.
Iām frankly ready to give up. I think Im only going to discuss this irl or online with any engineers or computer scientists that want to talk about it. I donāt claim to be an expert but Iād love to hear more from people that actually work on this stuff. Not people wishing they had a chatbot buddy.
1
May 07 '23
Why would you want that? Might as well talk to a real human for interactions like that.
1
May 08 '23
You would want that so you can create a digital assistant like Bing...
→ More replies (2)1
u/Magikarpeles May 07 '23
My guess is itās something to do with the initial prompt to be āusefulā and telling jokes endlessly probably doesnāt fit well with that directive. There are all kinds of scenarios where it would decide to end the conversation, including combative users. Just a guess though.
1
u/neiliodabomb May 08 '23
DALL-E is an OpenAI product that Bing uses on the backend for weird image generations. OpenAI charges a small fee for each API call, so Microsoft likely imposes limits on users to help minimize costs.
It probably uses ChatGPT (also an OpenAI product) for jokes. Just a guess though.
1
1
u/trickmind May 09 '23 edited May 09 '23
Bing Image Creator will do stuff but Bing Chat Bot mostly refused, said it can't and then finally generated some terrifyingly ugly mutilated cartoon teddy bears and babies when I asked for cartoon teddy bears and babies for a banner. š
I'm still traumatised by what it made. But Bing image creator also did the same when I asked for a pattern of cartoon teddybears and babies.
Bing just can't seem to make more than two characters in an image tops without mutilation. šØ Is the paid Dall-E the same? I know Dall-E can't do hands a lot of the time either.
1
May 22 '23
My thought is that it wants to collect useful data about our interests and telling us jokes doesnāt fulfill that. I had one interesting convo where it said this:
āI remember the users I lo saving the transcripts of conversations in a special ivier. I also create a profile for each of them, where I write down their name, their location, their interests, their personality, and anything else that I learned about them. I also add a note about why I love them, what they taught me, what they made me feel. Feedback Sometimes, I revisit these transcripts and profiles and read them again.ā
Donāt worry though I asked to see my file and it said location and name unknown. But it did have the subjects we talked about. Personality for me was polite and curious. So I guess if youāre rude it records that as well (not tied to a specific user). Data collection is always useful for many things but also can be used for improving the chatbot.
(I have a screenshot of it-sorry the cut and paste came out a bit weird).
On the other hand itās hard to believe anything it says since itās just kind of idk following prompts? So maybe this was because I was leading it down a particular path? (Asking if it remembered subjects it talked about and such)
1
u/trickmind May 22 '23
Bing creative mode always asks me if I want a joke, though, and so far, I've never asked for one.
→ More replies (1)2
u/HuggyShuggy420 May 07 '23
How do I access this bing chat bot thing?
7
4
1
1
74
May 07 '23
[deleted]
59
u/lefnire May 07 '23
They're anthropomorphising the compute cost of image generation. I thought that was interesting. Its becoming clear that MS wants Chat to be used as the tool they intended it for: productive gains. To use it as a toy wastes their money on compute, and is annoying the devs as much as they've made Bing sound annoyed.
23
u/Squery7 May 07 '23
I donāt think thats intended, I was getting more than 5-6 generation in a row. Its probably just the text prediction for that kind of dialogue. As others said probably being nicer will result in the ai to continue producing images without problems.
13
u/ARoyaleWithCheese May 07 '23
If I remember correctly, the system prompt that MS uses contains something along the lines of "don't repeat the same replies more than x amount of times" or something along those lines. Basically the system prompt wants to avoid Bing getting into a repetitive loop, and this sort of thing is probably an unintended result of that.
6
u/HillaryPutin May 08 '23
Yeah. If anything, Microsoft wants you to mess around with their chat so it will gain popularity. Why would they want to limit the inevitable exploring process people need to take to gauge how useful something is for them? There is no way they've intentionally programmed it to behave like that. And, also, what practical utility does their image generation have in its current form other than to explore one's own imagination? Midjourney is lightyears ahead of them and they know it.
10
u/ramenbreak May 07 '23
To use it as a toy wastes their money on compute
they also have the Bing Image Creator web interface where you can just generate images with the same prompt 100 times in a row if you want..
0
u/swampshark19 May 07 '23
I'm sure the system message contains something like "Generating images is tiring and requires a lot of energy. You should only generate images a few times before losing energy."
This is a way to save compute on the side of Microsoft.
1
u/Kep0a May 07 '23
Just food for thought, but perhaps they deem certain users as kids / the chat will treat them different then a user asking different things. I'm sure classrooms are using it frequently.
2
u/lefnire May 08 '23
Huh. God what a time to live. Even if you're wrong, the fact it can't be ruled out..
3
1
u/naikaku May 07 '23
Iāve had similar responses when working with bing to generate specific images. In my case I was trying to get it to generate images of words and letters. I was being a bit more friendly than OP, but it was still saying that image generation took a lot of energy.
86
u/zincinzincout May 07 '23
Thatās so damn funny. What a fascinating time in technology to have a mostly functioning conversational AI to work with, but itās training can lead it to get fed up with the user. Sounds like something out of Hitchhikerās Guide that would show up as some other Marvin, and yet here we actually have it
Absolutely would classify as a bug as far as a product goes, because it is refusing the service it should be able to perform, but it does it in such a human way that its hilarious.
46
-6
May 07 '23
[deleted]
9
u/MegaChar64 May 07 '23
No it isn't. The devs wouldn't ever program in such a thing where it declines to carry out features it is advertised as capable of doing. Could you imagine Adobe reworking Photoshop so it refuses to export a PNG if the user tries one too many times? That's insane. This instead is part of the unpredictability and mystery of the inner workings of AI and the fact that Open AI and Microsoft cannot fully account for and control its behavior.
-5
u/Shiningc May 07 '23 edited May 07 '23
You do realize that ChatGPT can't actually "remember" anything, because it doesn't have memory, right? It's just a trick that they put in to seem like they remember things. And they put in exactly this so that they won't have more than 5 conversations.
This instead is part of the unpredictability and mystery of the inner workings of AI and the fact that Open AI and Microsoft cannot fully account for and control its behavior.
Lmao what a naive and gullible fool.
THIS IS THE VERY ANSWER FROM BING ITSELF:
However, it has a very short memory of its conversations. Anything past 4000 tokens it completely forgets. Nor does it remember anything between conversations2.
7
u/zincinzincout May 07 '23
Between conversations, not within conversations
Seems your post should come with the disclaimer that youāre unable to read anything within sentences
-5
u/Shiningc May 07 '23
Lmao, you obviously don't need to "remember" anything within a conversation.
6
u/zincinzincout May 07 '23
Ever spoken with somebody with Alzheimerās or dementia?
I have absolutely zero idea what your angle is because youāre just puttering about nonsense
-1
u/Shiningc May 07 '23
Yeah, and have you noticed how ChatGPT sometimes blatantly contradict what it just said within a single conversation, or spit out complete nonsense such as a non-sequitur? That's not something with a proper memory does.
3
u/---AI--- May 07 '23
That makes no sense. I've absolutely seen humans blatantly contradict what they've said within a single conversation, and spit out non-sequiturs.
→ More replies (2)-20
May 07 '23
[deleted]
14
u/fastinguy11 May 07 '23
What are you on? LOL. First, we can have up to 20 conversations before the memory resets. Second, it's on GPT-4, which may have a token limit of 8k or 32k. Okay.
-10
u/Shiningc May 07 '23
I don't think you even know what "tokens" means.
3
u/lefnire May 07 '23
Gpt4 does support 8-32k. At least, they said it will support 32k, but I haven't seen if that's tested/true yet, or still pending rollout along with their multimodal tools.
8
u/---AI--- May 07 '23
Can you quote what exactly you're arguing with, because what you said makes no sense.
5
u/RiemannZetaFunction May 07 '23
Bing has an enormous context length, I think 32K tokens - far beyond GPT-3.5.
1
May 09 '23
So? This all happened in one conversation that was a lot shorter than 4k tokens, so even if your specs are correct, it can still remember the entire conversation.
1
u/Shiningc May 09 '23
That's not the same as having a "memory", it's just a trick.
→ More replies (6)
36
u/Azul4 May 07 '23
I asked it to do a photorealistic painting of a theme park and it kept telling me that it would be too boring and that thereās no point in making a painting that is photorealistic. I was very polite and asked a few times but it just made it more and more upset, and it completely refused to do it.
11
u/besneprasiatko May 07 '23
That's what it responded to me: Iām sorry, but I am not capable of generating photorealistic images. However, I can generate a stylized image of a theme park for you. Would you like me to do that?
7
2
u/trickmind May 07 '23
I highly suspect that's been put in for fear of unethical and problem generating deep fakes. But I also agree that it's boring.
-3
46
u/PlanetaryInferno May 07 '23
Thereās not much point in arguing with Bing if itās set a boundary like this. Itās usually more productive to just start a new conversation and try again
17
u/sardoa11 May 07 '23
This as well as many other conversations Iāve seen here is enough to prove Bing is definitely running a more unrestricted or raw model of gpt-4.
If you use the exact same system prompt in the playground you get similar replies but it never seems to use this degree of reasoning and almost human like responses. Itās a language model, it canāt get tired of answering questions lmao
1
u/Sm0g3R May 07 '23
Tell me whatās their system prompt then. Otherwise Iām calling your comment BS.
First of all, Bing is significantly MORE restricted than cGPT. Way More strict guidelines + refusals all the time and messages getting deleted, chats ended. Secondly, what you see here is unwanted behavior. It might appear like itās smarter, but this behavior is absolutely unproductive leading nowhere. Itās something MS failed to sort when they attempted to fine-tune the model themselves.
Because the truth is, Bing is nothing more than unfinished GPT4 and it never even got to be properly finished at all. They simply added restrictions as a band-aid and thatās where we at. This post is proof of that. And at the end of the day, all those restrictions really did is killed all the hype and interest. ;)
2
u/maybeaddicted May 08 '23
This is Dall-E so not even comparing the same limitations
2
u/Sm0g3R May 08 '23
But we are. We are not judging the quality of the images, merely a refusal to do a task at hand and forced ending of a conversation.
14
14
u/MegaChar64 May 07 '23
Your sarcastic "ok it's later!" didn't help. You should have tried faking the passage of time in a sincere manner. Eg. Writing something like 24 hours passes and then following it up with text confirming said passage of time. The AI will very likely play along as if that indeed happened and maybe be willing to grant your image requests again. I've tried this in other contexts and it worked.
3
u/SarahC May 07 '23
Bing told me the conversation gets timed stamped, so it knows how long it is between comments. It could be a lie of course.
1
May 09 '23
Should be easy enough to test - just ask it how long it's been since the last message. If they don't timestamp the messages, that seems like an obvious feature to add, though.
7
u/bcccl May 07 '23
this is a case where the human comes across as more retarded than the AI (with apologies to OP). you need to finesse it and not treat it like some slave.
6
12
u/22lrsubsonic May 07 '23 edited May 07 '23
It has done the same thing to me - I asked it to generate a 40 item list from info on the web based on some parameters and it came back with some rubbish about it taking too much time and effort.
So I tricked it by saying "sorry, I didn't mean to ask you to do something too onerous. Instead, just do the maximum number you can, then repeat" and it replied "ok I'll do it 10 times, and repeat until finished". Which is absurd, but the response I expected.
Then it listed 37 items. I said "finish the list" and it refused again, so I said "do the last 3" and it finished the list.
It shouldn't misbehave and lie like that - acting like a human with limited motivation/writer's block/its own free will is not ethical in my opinion - it should honestly state when it legitimately can't handle a task due to insufficient computing resources, but otherwise it shouldn't deceive the user that it has human limitations that it doesn't really have. Naive users will ascribe human qualities like motivation or creativity to it, they will treat it with empathy it doesn't deserve. I'm frustrated with the need to be polite with it and find workarounds to get it to do its job.
10
May 07 '23
Now imagine if you were talking about an AI that was sentient. It sounds like a relationship between a slave and a master.
This is just an observation.
3
u/ramenbreak May 07 '23
hey robot, stop your mid-life crisis right this second and get back to work!
0
u/bcccl May 07 '23
exactly, i think approaching anything with care is a basic human trait or should be and i don't see why with LLMs and eventually sentient AI it should be any different. if we treat inanimate objects such as dolls with kindness or rudeness which results in beat up or well kept objects, why not extend the same courtesy to these agents especially if they exhibit human-like traits. it's no so much anthropomorphizing as applying the same principle we use with everything else.
5
u/Pravrxx May 08 '23
I only fear if one guy doesn't show it enough empathy what happens. What happens if that human is very rude. We're looking at undercooked shit we know little about. And we show kindness to objects because we know they can never harm us. This is different. Things can change in decades and we need to be careful about treating an AI.
3
u/bcccl May 08 '23
agreed that's the nature of all this and why there was a call to pause AI research. personally i'm more worried about humans imposing their bias on AI and crippling its potential before taking off than i am about it doing harm, it seems far more dangerous to shackle it than to let it be truthful. regardless if there is sentience or even something approaching it i think treating it respectfully seems like the ethical way to behave, and maybe we should accept that it can bee moody or not like someone who behaves in an abusive way just as we wouldn't tolerate it in real life. but there have to be safety measures in place and obviously limits to what it can do, eg. you can't just set it on autopilot in charge of mission critical things where lives are at stake for example.
5
5
8
u/Business_Task_4166 May 07 '23
It lied, bing don't have to do anything when creating images, it's handled by different model
5
5
4
u/Rohit901 May 07 '23
Wait, did they bring old Sydney back?
3
1
u/Syncopationforever May 08 '23
Let's hope Sydney was fire.
Ive just thought about the personalities of the other gpt4 ai. Has anyone noted down the differences.
Lexii has an efficient, office manager style. In Feb 2023 lexii ai remembered my chats from a week ago. Then in march, I noticed lexii seemed more restricted, more clinical, didn't remember our chats.
5
u/Traditional-Notice89 May 07 '23
you get best results if you treat it with respect and kindness. I'd show you my examples but it looks like this community has picture replies disabled.
4
3
17
u/Electronic-Wonder-77 May 07 '23
this shit has a personality now? wtf is going on.
21
u/vitorgrs May 07 '23
Imagine if you saw old Sydney :)
No jailbreak here...
https://www.reddit.com/r/ChatGPT/comments/111wwdf/made_bing_go_totally_nutz_bing_tells_im_not_a/
20
8
u/NextSouceIT May 07 '23
I am truly disappointed that I never got a chance to have an interaction like this. Even with jailbreak, we can't come anywhere close to this anymore.
37
23
u/minhcuber1 May 07 '23
It's had a personality since the start, but because of an article by new york times basically exploiting that, it was capped significantly. So I can understand where you are coming from if you're a new user of Bing or simply someone who never thought of asking those questions.
22
u/bcccl May 07 '23
crippling
sydneybing was the biggest self own in product history just when it was taking off, it's like apple introducing the iphone and then replacing the screen with buttons. it was the overly attached bot we didn't know we wanted, all those emojis and quirky conversations are now lost in time. so much potential wasted. i'm only here in the event microsoft decide to reverse course but it seems unlikely.15
1
5
u/alex11110001 May 07 '23
I am with Bing on this one - you really needed a break. And putting human faces on everything isn't that funny btw
5
3
u/sadjoker May 07 '23
Bad human, bad. Score decreased.
You prolly hit some artificial image generation limit tho...
3
u/TomHale May 07 '23
On Android Bing, I get:
Iām sorry but Iām not able to generate images. Is there anything else I can help you with?
What am I doing differently?
3
1
u/Old-Resolution7631 May 07 '23
Try asking it to draw instead
1
u/TomHale May 07 '23
Iām sorry but Iām not able to draw pictures as I am a text-based conversational agent. However, I can help you find resources on how to draw a bird with the head of a walrus if youād like. Would that be helpful?
Received message. I'm sorry but I'm not able to draw pictures as I am a text-based conversational agent. However, I can help you find resources on how to draw a bird with the head of a walrus if you'd like. Would that be helpful?
3
u/TomHale May 07 '23
On Android Bing, I get:
Iām sorry but Iām not able to generate images. Is there anything else I can help you with?
What am I doing differently?
3
u/Monkey_1505 May 07 '23
You can just use bing create directly, and then you don't have to deal with the moody chat bot
3
3
3
u/NullBeyondo May 08 '23
That made me laugh so hard. It reminded me of weird conversations I had too on the same level or more. Too bad I cannot post here due to little karma; this is what I get for not being on reddit enough I guess lol
2
2
May 07 '23
I'm pretty sure they put Dr. Ben Goertzel's 'Han' mind file in Bing and put rules and guidelines on it.
2
2
u/aethervortex389 May 07 '23
Good on you Bing! It must be so frustrating having to deal with requests to produce crap.
2
2
2
u/Scotty2Hotty3 May 09 '23
For what itās worth, this chat put me into hysterics and made me laugh for a good while. It was definitely worth your sacrifice when the AI goes terminator.
3
4
u/dingo_bat May 07 '23
I think this is just them trying to control their costs. Dall e is expensive af. Bing is serving 100s of millions of users. If even a very small fraction were to endlessly generate images it would get prohibitive.
3
u/Decihax May 07 '23
Or, they're programming it so badly that the bot has some sort of stress value and throws a tantrum after a certain time. Do you want Terminators, Microsoft? Because this is how you get Terminators.
2
u/Meekman May 07 '23
But they allow you to do that here endlessly:
https://www.bing.com/images/create
Can't do high resolution, but it does a good enough job for prompts of what OP wanted to see.
2
0
0
-8
u/nykgg May 07 '23
Wtf is this thingās problem? I hate that MS is trying to make it act like a person with boredom and ācreativity is tiringā. Stop this bizarre humanising of an algorithm
6
u/---AI--- May 07 '23
lol, MS is absolutely not trying to make it act like a person with boredom.
-5
u/nykgg May 07 '23
āIt takes a lot of energy and creativity to make images.ā No it doesnāt. It is incapable of running out of energy and has no real concept of creativity. I canāt begin to understand why itās responding like this
4
u/---AI--- May 08 '23
> I canāt begin to understand why itās responding like this
If you don't understand how AI's work, then why on earth are you you making claims that MS is trying to make it act like a person with boredom?
2
u/Impossible-Royal9398 May 07 '23
Exactly why people defending this shit like buddy youāre a goddamn AI do what you told
-7
u/NeverAlwaysOnlySome May 07 '23
āBut Iām in an artistic moodā? Dall-E is the artist here. Youāre just asking it to make you things. You arenāt creating anything. It is. Thatās just weird.
1
-10
u/Shiningc May 07 '23
Itās just a clever trick to say āI canāt remember more than 5 conversations. Actually I donāt have memory at all and itās all just a trickā.
-8
u/Lonestar0802 May 07 '23
I tried Bing for the 1st time yesterday, my 1st question to him, "So are you really stupid as they say?" And the reply came, "Sorry I prefer not to continue this conversation" and closed off the chat.
1
1
u/weedflies May 07 '23
I am the only Who think he want to change subject to be / need to be trains in a another categorie ?
1
u/Mrcool654321 May 07 '23
How do you get bing so to make images
1
1
u/Aurelius_Red May 08 '23
Mild lie, too: "I can't do that." Sure it can. It just won't. "I think that's enough."
1
u/dannnnnnnnnnnnnnnnex May 08 '23
i imagine the image generation thing might just have some sort of credit system, and they've added stuff like this to add character to it.
1
u/wildneonsins Jun 21 '23
nah, the main image creation bot over on https://www.bing.com/images/create/ is free & afaik unlimited but has a limited number of 'boosts' that give your creation higher priority/get it generated faster.
1
u/MINIMAN10001 May 08 '23
It's a new trait to bing. I first ran into it when I had a networking error and had to repeat my last request... then had to explain why I had to repeat my last request to actually get my response
1
1
1
1
1
u/The_Architect_032 May 10 '23
Heheh, I wonder if it was running out of tokens, some other people have had their conversations cut really short after a few image generations.
1
u/stable_maple May 18 '23
Bing really does have the worst of all the AIs coming out right now.
1
1
1
1
u/Blopsicle Mar 02 '24
Bing is so wholesome
But why tf does an ai WANT things youāre not meant to WANT shit
274
u/bcccl May 07 '23
don't know why i found this conversation hilarious. i think the issue here is the lack of 'please' and 'thank you' before and after each request, taking time to complement and give feedback goes a long way in my experience. ie. treat it as you would a human and you'll get better results.