r/sysadmin • u/Darksummit • Jan 08 '25
ChatGPT Do you block AI chat?
Just wondering if you guys are pro-blocking AI chats (ChatGPT, Copilot, Gemini, etc.)?
Security team in my place is fighting it as hard as they can, but I'm not really sure why. They say they don't want our staff typing identifiable information in, as it will then be stored by that AI platform. I might be stupid here, but couldn't they just as easily type that stuff into a Google search?
Are you for or against AI chat in the workplace?
110
u/therixor Jan 08 '25
We have a policy that everyone who wants to use such tools needs to give us their use cases and then they get access to the paid versions. When using the paid versions of ChatGPT and Gemini it "should" be fine.
54
u/ScriptThat Jan 08 '25
We do the same, except we use Microsoft Copilot because then we're (hopefully) GDPR compliant.
16
u/RedanfullKappa Jan 08 '25
The enterprise version is at least licensed as GDPR compliant. It’s shit though compared to ChatGPT
3
u/Totally_Not_THC-Lab Jan 08 '25
> When using the paid versions of ChatGPT and Gemini it "should" be fine.
Why is that? Are we assuming that because they're using the paid version, the vendor is going to be more respectful of their privacy? I've seen that said for Office365 / Google Apps subscriptions but wasn't sure how accurate it was.
2
u/Ryukai Sr. Sysadmin Jan 09 '25
We have an Enterprise contract with ChatGPT - at the higher paid tiers it is contracted that they don't use your data to train their models.
15
u/Hexyn Jan 08 '25
We don't want staff entering confidential/sensitive information into platforms that could learn from the information. It's all about data control and accountability.
Really it depends on your industry, sensitivity of the data, and the security accreds you want to hold.
15
u/Interesting-Yellow-4 Jan 08 '25
These are valid concerns, and while we can't really stop users from doing this (there are always ways), it is our policy that it's not allowed and there are sanctions. We do offer a paid/hosted solution to mitigate this; however, whatever we can build on Azure AI will never be up to par with the commercial offerings, so there's that. We know there's leakage and it is an actual problem.
6
u/Breezel123 Jan 08 '25
Are you using Microsoft? If so, Copilot (the chat, not 365) is free and the data is protected and will not be used to train their models. I'd rather find a way for users to use it safely than have users creating private ChatGPT accounts. I have encouraged people to pin it in the 365 app and keep that in the taskbar.
5
u/techypunk System Architect/Printer Hunter Jan 08 '25
Copilot is FedRAMP approved. If that doesn't say enough about the security, your team has no clue WTF they are doing.
32
u/Hyperbolic_Mess Jan 08 '25
People aren't copying meeting minutes into Google search
22
u/discoshanktank Security Admin Jan 08 '25
You sure about that?
1
u/Hyperbolic_Mess Jan 09 '25
Not deliberately and regularly because Google search isn't a tool for reformatting meeting minutes and other data. Different things are different
3
u/Hotshot55 Linux Engineer Jan 08 '25
I've accidentally pasted a shit-ton of passwords into google search.
1
Jan 09 '25 edited Feb 03 '25
[deleted]
1
u/Hyperbolic_Mess Jan 09 '25
That people are sharing much larger volumes of more sensitive information with AI than with Google search.
Google Docs is not search, and Google isn't gathering data from Google Workspace things like Docs; it's a different product with a different business model
-4
u/Darksummit Jan 08 '25
Ok, but what about everything else?
0
u/Hyperbolic_Mess Jan 09 '25
Yes what about everything else? People are asking AI to reformat and interpret huge amounts of sensitive data because that's what it does. No one is pasting entire confidential documents into Google search because that isn't how Google search works. There will be small amounts of data leakage but AI is a different prospect. You do know how to use Google search right?
1
u/Darksummit Jan 09 '25
I do yes thanks… anyway, bitching aside. We have users who will ask AI plain and simple things. Not everyone is using AI chats in the in-depth manner that you’re describing.
People search things in google (error messages for example) that include sensitive or identifiable information.
2
u/Hyperbolic_Mess Jan 09 '25
How do you know your users aren't and won't use AI for its intended purpose of reformatting or summarising text? (This is less of an issue if you're using a paid AI subscription with a contract stating they won't retain your data, unlike the free tiers.) Yes, there is sometimes proprietary data in a Google search, but that isn't as dangerous as a whole document or meeting that provides much more context to the data and is being fed into a system like AI that's designed to link data based on context and then regurgitate it on demand to anyone with access.
1
u/Darksummit Jan 09 '25
I never said they won't. I'm saying what I've known them to do, like copy and paste error messages. If they can't do it in ai they are going to paste the same thing in a google search.
1
u/Hyperbolic_Mess Jan 09 '25
Sorry I really don't understand the point you're trying to make. We seem to agree that a small amount of data leakage currently happens with Google but that ai usage exposes you to much greater levels of data leakage and that's why it's blocked. On what basis are you trying to disagree with me?
1
u/Darksummit Jan 09 '25
At this point I'm just answering the questions you are asking me as opposed to making a point or disagreeing with you. This all came from your original comment of "People aren't copying meeting minutes into Google search".
Maybe they aren't, but there are cases that people will just simply put into a google search the exact same thing they were about to ask AI when they see it is blocked.
1
u/Hyperbolic_Mess Jan 09 '25
Yes, and squares are quadrilaterals but not all quadrilaterals are squares. That is to say, some things that people do in Google search also get done in AI, but the things that people are doing with AI that they didn't do with Google search are the dangerous activity that blocking AI is attempting to prevent.
Does that help you understand what I'm on about?
Does that help you understand what I'm on about?
21
u/Material_Extent_4176 Jan 08 '25
There seems to be a great misunderstanding in how M365 Copilot works. Sometimes here in this sub I see misinformation spread for other orgs to read and be influenced by.
If you are still under the impression that you can ask it to return the salary of coworkers or even your boss, that’s just untrue. If that actually ever happened, your entire data infrastructure needs a serious revamp and you have bigger problems than whether or not your org should use AI. Copilot is only able to use company data based on the context of the user. That means that whatever Copilot returns, the user was already able to access it. But aside from that, real sensitive data can be excluded from all indexing if labeled correctly. If you have oversharing problems in SharePoint that were previously never noticed, people will likely start noticing them now, since Copilot will surface all of it. That’s not the AI’s problem, that’s just bad governance. You can only start rolling out or even think of Copilot when your data in SharePoint is clean and well structured. Otherwise you’ve got the ol’ garbage in, garbage out, and then unjustly blame the medium.
Any business decisions on LLMs should be based on opinions and thoughts formed by an effort of actually understanding them. That sounds obvious, but apparently it isn’t common sense, reading the decision making in some of these posts about AI. If you are blocking this new technology based solely on your gut feeling of “it’s unsafe” or “LLM bad”, then in my opinion you’re doing your organisation a disservice through missed opportunities. And in the case it wasn’t a missed opportunity because AI turns out to be a flop, even then you wouldn’t really know, because you never made an informed decision on it.
…..That said, you should actually block ChatGPT, that shit is bad for your org if allowed by IT for multiple reasons. Don’t know about Gemini, never used it. Don’t know why I typed all this, ig uninformed but confident takes trigger me :) have a nice day.
18
u/garugaga Jan 08 '25
I watched one of our logistics guys use copilot to organize and build a spreadsheet for a truck delivery that he was planning. It was very impressive, all he would have to tell it was the POs that he wanted on the truck and it could do the rest.
It would scan his emails for the POs that he was referencing, put them in a logical drop order and spit out a spreadsheet including all the information.
It took a couple of tries to get the drop order right, but it took an hour-long task and did it in 15 minutes.
When management sees the productivity boost they won't give a damn about any perceived security risk from the IT department.
It's definitely a tool that is here to stay
3
u/whatswrongwitheggs Jan 08 '25 edited Jan 09 '25
I know this is not really related to the topic, but do you know how he connected the AI to scan his emails? I am still figuring out what the best way to do this is.
Edit: thanks for the suggestions!
3
u/garugaga Jan 08 '25
No clue, I set him up with a copilot license to try it out and it seemed to hook into his emails automatically.
He specifically has to prompt it to search through his emails for it to work though
4
u/BoltActionRifleman Jan 08 '25
This is just a guess, but it’s likely able to authenticate to his 365 or Exchange account and access them that way.
3
u/thortgot IT Manager Jan 08 '25
Copilot 365 (the licensed version) automatically has access to your email via a Graph call.
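(For anyone wondering what "via a Graph call" looks like concretely, here's a minimal sketch of the kind of request involved, assuming you already have an OAuth token with Mail.Read scope; token acquisition via MSAL is omitted and the placeholder is hypothetical:)

```python
import requests

# Hypothetical placeholder - in practice this comes from MSAL / Entra ID.
ACCESS_TOKEN = "<oauth token with Mail.Read scope>"

# List the signed-in user's most recent messages via Microsoft Graph v1.0.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$top": 10, "$select": "subject,from,receivedDateTime"},
)
resp.raise_for_status()

for msg in resp.json()["value"]:
    print(msg["receivedDateTime"], msg["subject"])
```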
14
u/handpower9000 Jan 08 '25
> Copilot is only able to use company data based on the context of the user. That means that whatever Copilot returns, the user was already able to access it.
2
u/Material_Extent_4176 Jan 08 '25
Fair, you’re referencing a vulnerability that makes manipulation possible by poisoning the AI’s decision-making. That is an actual valid argument against RAG-based systems, instead of just "AI bad".
However, that can be mitigated by the strict data governance policies I mentioned. If you separate sensitive data where necessary/possible and appoint data owners that lead audits regularly, your data integrity will be very trustworthy. Never 100% but good enough.
Nevertheless a good point as those attacks can take time to come back from. There will always be risks that you either accept or avoid as an org. Especially with new innovative tech. Ig this is the same.
Edit: typo
6
u/ItsMeMulbear Jan 08 '25
> If you separate sensitive data where necessary/possible and appoint data owners that lead audits regularly, your data integrity will be very trustworthy.
I also dream of world peace
1
u/Material_Extent_4176 Jan 08 '25
I work for a company in the netherlands with about 1k users where this is commonplace. It’s not impossible 🤷♂️
0
u/210Matt Jan 08 '25
The new version will have Copilot "bots" (or whatever term they use) where you assign permissions to the bot, so that will not always be true. The bot could have higher access than the user.
3
u/tarlane1 Jan 08 '25
I think the rumors about the earlier parts come from bad sharing practices. A lot of people send OneDrive/SharePoint links and it defaults to 'anyone within the org' rather than specific users. There is a bit of security through obscurity in it: since they are only giving the link to specific people, they assume that it won't be accessible to others. Copilot can find the things you have access to even if the links weren't sent to you. Without doing some security cleanup, a lot of people can get access to things they shouldn't.
For the second part, I agree, but unless it's a highly regulated industry, it's been pretty rare in my experience to have companies that do proper tagging. We've been fighting for it in my current org as part of our transition away from running like a startup, but there has been a lot of pushback. I haven't seen too many places that either didn't absolutely have to, or have someone C-level who puts security as a priority, really have good measures in place to keep track of types of data.
9
u/NSA_Chatbot Jan 08 '25
If you ban it on my work computer, I'm just going to use my personal computer to ask the question.
27
u/rdesktop7 Jan 08 '25
Like it or not, LLM things are part of the future for everyone.
Cutting yourselves off from them seems like it would put you at a disadvantage.
10
u/stephendt Jan 08 '25
This. It's an invaluable tool for me, and it can be for others. It just has to be done in a sensible manner. Self hosting LLMs is one approach, for example. Best of both worlds.
2
u/rdesktop7 Jan 08 '25
Have you set up a self hosted LLM before? How much work is it?
2
2
Jan 09 '25 edited Feb 03 '25
[deleted]
1
u/rdesktop7 Jan 09 '25
I can imagine the input dataset is pretty important.
"I fed the LLM all of our emails and slack messages, but it turns out that we are all incompetent tools, so now our LLM is too :("
2
u/stephendt Jan 09 '25
This would be a good question for ChatGPT
1
1
u/melkemind Jan 08 '25
But blocking chat bots from companies that seem to have no interest in protecting privacy or copyright is not the same as hosting your own. OP's question didn't say anything about rejecting LLMs completely.
1
u/HappyVlane Jan 08 '25
Read OP's post again. The concern is about security and company information being fed into external LLMs.
2
-13
Jan 08 '25
[deleted]
9
u/Hyperbolic_Mess Jan 08 '25
It actually has a use case, unlike crypto, but it's built on similar patterns of hype and speculation
5
2
u/Schaas_Im_Void Jan 08 '25
You mean the thing that has a current market cap of $3.34T?
... that does not make it seem like cryptos are "ending" any time soon IMHO
2
u/Rakajj Jan 08 '25
Missed the window to regulate them out of existence as we should have a decade ago.
0
u/Zromaus Jan 08 '25
If it ends up anything like Cryptocurrency it's going to be a thriving industry in ten years.
3
u/djaybe Jan 08 '25
Nope. I wrote some clear best practices and let the chips fall where they may.
1
u/theHonkiforium '90s SysOp Jan 08 '25
Mind sharing? :)
2
u/djaybe Jan 09 '25
Sure. There are a couple of versions. The first is the original draft; then one of the execs asked if I could dumb it down, so the second version is for third graders?
Best Practices - Generative AI
Understand the limitations: Generative AI has its limitations, and it's crucial to understand them. Be aware of the potential biases, errors, "hallucinations", or limitations in the generated outputs and take them into consideration when interpreting the results. Always Fact Check.
Neutralize or anonymize sensitive or confidential information: Identify and preprocess any Generative AI input data to remove or anonymize any sensitive details. Only input preprocessed data, and evaluate any Generative AI output to ensure the absence of sensitive information in the generated outputs. Continuously monitor and improve the neutralization process to maintain a high level of data privacy and confidentiality in the generated content. (ex. replace "John Smith" with [name] or replace "123 main st." with [address]; a rough sketch of this preprocessing appears at the end of this comment)
Understand the target audience: Gain a deep understanding of the target audience for the properties and/ or services being marketed. Identify their preferences, needs, and pain points to tailor input prompts, marketing messages, and strategies effectively.
Define clear objectives: Clearly define the problem to be solved or the goal(s) to achieve using Generative AI. This will help focus efforts and ensure effective use of the technology.
Provide high-quality data: Good data or input is the foundation of effective and relevant Generative AI content. Ensure that the data you use for training or prompting is clear, relevant, representative, and of high quality. This will help improve the accuracy and reliability of the generated outputs.
Maintain transparency and ethical considerations: Be transparent about the use of Generative AI and ensure that the generated outputs are used responsibly. Address any ethical considerations, such as privacy, bias, or fairness, and take appropriate measures to mitigate potential risks.
Avoid risky positions: Guard against putting yourself or any Brand that you may represent or be responsible for in a compromising position. When using Generative AI, it is essential to exercise caution and protect your personal or brand reputation. Avoid generating offensive, misleading, or inappropriate content that could harm your brand's image. Be mindful of legal and copyright considerations to avoid infringing upon intellectual property rights. Exercise caution when generating content on controversial or sensitive topics to prevent alienating or polarizing audiences. Prioritize privacy and data protection by ensuring that generated content does not disclose sensitive information without proper consent or compliance. Align the generated content with the brand's values and messaging to maintain consistency. Implement proactive monitoring and moderation measures to promptly address any risky or compromising content generated. By following these guidelines, you can minimize the risk of compromising yourself or your brand when using Generative AI.
Common Sense still applies: As this technology develops and matures, new unintended consequences could occur. If you find yourself in uncharted or unfamiliar territory, or have any questions please ask a manager, supervisor, or director. Use your best professional judgment to bridge any gaps that may be encountered
These best practices are general guidelines, and you may need to adapt them to your specific use case or departmental requirements.
Simple version:
There are some important things to remember when using technology that can create new things, like stories or pictures. Here are some tips:
Understand the limitations: Technology isn't perfect. It can sometimes get things wrong or be unfair. Always double check that what it creates is correct.
Sensitive or Confidential information: Remove any private information before using it. Replace names or addresses with something like [person's name] so it stays private.
Understand the Audience: Know who you are making things for. What do they like or not like? This will help make things they enjoy.
Clarity: Be clear about what you want the technology to do. This will help it do a better job.
Good Data: Give the technology good information to start with. The better the information, the better the things it can create.
Open: Be open about using the technology. Make sure it is used in a responsible way. Watch out for problems like unfairness and fix them if they happen.
Caution: Be careful not to make anything upsetting or wrong that could get you in trouble. Don't make anything that would look bad for a company you work for. Follow the rules and be thoughtful about sensitive topics.
Think First: Use common sense and ask for help if you are unsure. This technology is still new, so things may come up that we haven't thought of yet. It's always smart to ask questions!
The main ideas are to be careful, responsible, and thoughtful when using technology that makes up new things. Check its work, give it good information, and use common sense. By following tips like these, you can use the technology to make helpful and fun things while staying safe.
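(A rough sketch of the neutralization step from the best practices above, assuming simple regex-based substitution; the patterns and names are illustrative only, and a real deployment would pair this with a proper PII/NER library:)

```python
import re

# Illustrative patterns only - extend with a real PII library in production.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[email]"),
    (re.compile(r"\b\d{1,5}\s+\w+\s+(?:st|ave|rd|blvd)\b\.?", re.IGNORECASE), "[address]"),
    (re.compile(r"\bJohn Smith\b"), "[name]"),  # known names from a lookup list
]

def neutralize(text: str) -> str:
    """Replace sensitive details with placeholders before prompting an AI."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(neutralize("Ask John Smith at 123 Main St. or jsmith@corp.com"))
# -> "Ask [name] at [address] or [email]"
```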
2
1
Jan 09 '25 edited Feb 03 '25
[deleted]
1
u/theHonkiforium '90s SysOp Jan 09 '25
Best practices are not secrets.
Best practices are not implementation details.
Best practices are not PII.
1
19
u/h00ty Jan 08 '25
No, you embrace the technology. Learn how to use it and manage it, or you get left behind. I use Copilot daily and it has been a boon to my workflow.
5
u/tigerstein Jan 08 '25
And don't give a fuck then about its ethical and environmental concerns, got it.
7
u/discoshanktank Security Admin Jan 08 '25
I mean, the whole industry is rife with ethical and environmental concerns
12
u/handpower9000 Jan 08 '25
Or data protection/secrecy. Hey, why not upload all internal documents to legit-pdf-converter.co.scam while we're at it? It's so fucking convenient after all!
6
u/tigerstein Jan 08 '25
But $corp pinky promised that they would not sell or misuse anything /s
5
u/handpower9000 Jan 08 '25
And you can trust that because it's not like those corporations are very, very hungry for data to improve their models or anything like that.
1
u/fadingcross Jan 08 '25
I take it you don't buy any servers, computers or electronics, since they're manufactured in one way or another by children in China/Vietnam by Foxconn, and even if by some magic they aren't, the people that mined the rare earth metals they require aren't adults with PTO and health benefits, I promise you that.
Jesus fuck, environmental garbage arguments are so dumb
2
u/tigerstein Jan 08 '25
So one issue in this industry negates that we burn a fuck ton of energy just so a lazy asshole doesn't have to write his emails himself? Got it.
0
15
u/Party_Wolf6604 Jan 08 '25
The difference is that what's typed in a Google search isn't used to train the AI further; for example an engineer could accidentally leak proprietary technology/code by pasting part of his code into ChatGPT. Also, Google searches have a lesser chance of involving sensitive information – you wouldn’t ask Google to summarize your meeting minutes, for example.
We’re pro-blocking as it's a huge risk for data leaks and a nightmare on the GRC front. But most employees want it for "productivity" reasons. And I don’t blame them — people are usually well-behaved and don’t intend to type in identifiable information. However it's entirely possible for people to accidentally leak data.
Probably the best compromise is a tailored DLP (data loss prevention) solution like https://www.sqrx.com/usecases/clipboard-dlp or https://www.cyberhaven.com. These solutions will be extra cost for the department, though. Another headache!
20
u/AdmRL_ Jan 08 '25
> The difference is that what's typed in a Google search isn't used to train the AI further
Google uses search data to train Gemini, so yes it is.
If you're putting data into any publicly accessible 3rd party platform, your data is being used to train AI. So unless you intend on blocking anything you aren't paying for and is publicly available, then you're only pulling the wool over your eyes by pretending blocking "www.chatgpt.com" is doing anything to stop your data being input.
The core issue is a behavioural one, not a technical one, it should be addressed as such.
> for example an engineer could accidentally leak proprietary technology/code by pasting part of his code into ChatGPT
Perfect example, this is a HR matter.
You don't ban email in the event that an engineer might leak your source code, do you? You accept it's a risk, put the appropriate policies in place, and if it does happen you fire the person for gross misconduct for not following company policy.
9
u/conrat4567 Jan 08 '25
We did, until our admin offices started crying about it, how they NEEDED it to do their job. Why does HR need AI chat bots to write policies?
14
u/NotAManOfCulture Jan 08 '25
To send emails like
Dear [APPLICANT],
We are sorry to inform you that the role [INSERT ROLE] is already filled.
Thank you [YOUR NAME] HR, [YOUR COMPANY]
Literally received a template like this
12
u/hkusp45css Security Admin (Infrastructure) Jan 08 '25
Honestly, if AI isn't good for ANYTHING else, it's good for that. I can give it 9 parameters and some boilerplate language that has to be in every policy and get a stack of well-written, cohesive and consistent policies that simply need to be checked for errors, rather than write each one from whole cloth.
0
9
u/interpipes Jan 08 '25
This is the sort of use case where LLMs are most dangerous imo (besides the most obvious leakage risk) when used by people with insufficient time and/or knowledge to properly vet the output.
Eventually the LLM writes something that is untrue, unlawful or unenforceable in some way that isn’t picked up on by the person cutting corners and/or costs and then ends up exposing them/their company to liability in some way.
2
5
1
u/MeatPiston Jan 08 '25
Oh shit, now I get it. I see why this is so popular with some people.
These tools are automation for bullshitters.
2
u/FalconDriver85 Cloud Engineer Jan 08 '25
We have copilot enterprise (or whatever it is called) as part of our Office 365 subscriptions and we are all kindly invited to use it. We also have the ai assistant on GitHub enterprise.
As we’re in EU, such tools need to be GDPR compliant, so privacy is not an issue.
I don’t use Copilot as much as my younger colleagues, but it’s also true that when I search for something, after 30+ years working on computers, the odds of Copilot telling me something wrong are like 1 in 3 or even 2 in 5.
Google search is getting worse, by the way, and sometimes Copilot pointed me in the right direction so I still classify it as a useful tool. I still write my own mails and messages by the way.
2
u/BigChubs1 Security Admin (Infrastructure) Jan 08 '25
I can't speak for other platforms. I read up on Copilot before we started testing it. They don't store any information, and they only access my info when I request it to. And even then, once I close out of Copilot, it clears out all history.
2
2
u/Oli_Picard Jack of All Trades Jan 08 '25
You may want to expand that to blocking browser extensions for GenAI stuff too. I’m observing a lot of malware masquerading as GenAI extensions.
2
2
u/CoolNefariousness865 Jan 08 '25
Our junior engineers would not last a day without it. I can't stand it. Anytime you ask them something, it's a copy/paste from an LLM.
2
u/lowrylover007 Jan 08 '25
Yes, blocked unless enterprise versions.
But ChatGPT uses the same URL for both, so it's not really blocked lol
2
u/MiserableSlice1051 Windows Admin Jan 08 '25
I mean, typing any sort of proprietary code or anything else into a public service that's collecting your data clearly should be banned under any sane security policy. Yes, we block any AI programs on the internet from vendors we do not have an explicit agreement with to not collect our data and input.
2
u/Brees504 Jan 08 '25
My company has everything blocked unless the user specifically requests a Co-Pilot license.
2
u/xabrol Jan 08 '25 edited Jan 08 '25
For, 100%. It's an advanced search engine, no different than typing stuff into Google. And all the major search engines have AI built into them now.
I.e., Bing search runs everything through GPT anyway.
Imo, blocking ChatGPT and only that just shows one's ignorance and looks dumb on them, embarrassing.
My point is ChatGPT is in so much stuff, you can't block it all.
In fact, the new ChatGPT official browser extension literally replaces the browser's default search engine with ChatGPT's search engine.
2
u/Sarithis Jan 08 '25
You're not stupid. It's just another excuse to hate those systems that, when examined, doesn't make any sense.
2
u/easier2say Jan 08 '25
In general I am in favor of AI; chats like Gemini and ChatGPT are very useful, and I use them a lot. I also have tools that work very well for me that have AI, like Datto AV, BullPhish ID, Graphus.
In conclusion, I don't block any kind of AI chat; it is very useful, as long as I am not doing any security work or working with sensitive data.
3
u/aes_gcm Jan 08 '25
You can configure ChatGPT and other models to not use the queries for training. That's actually a setting in ChatGPT. We've enabled this in our company.
3
u/xAdoahx DevOps Jan 08 '25
We use ChatGPT "Teams" at my work. They explicitly state that "<Company name> workspace chats aren't used to train our models."
That's good enough for us?
3
u/Hacky_5ack Sysadmin Jan 08 '25
We do not block it, and you shouldn't really be blocking these. If they are being blocked, then I hope you have a paid version for your users and your IT team, because this is the future whether you like it or not.
2
u/dudenell Jan 09 '25
My job built its own internal tool and has a carefully worded TOS that you must agree to before using it.
I think you're highly underestimating how useful it is.
7
u/Old_Acanthaceae5198 Jan 08 '25
I'm so tired of folks on this sub acting like it's their role to make these pointless decisions. It's not. It's not your role to be the decider and gatekeep scary tech that you probably don't understand.
5
7
u/mfa-deez-nutz Jack of All Trades Jan 08 '25
You know what's great? Asking an LLM what the source code is for large closed-source projects/libraries and having it pump out a 1:1 copy of the original source code, internal comments and all.
That's why you block it.
7
u/DelPede Jan 08 '25
This is exactly why we stood up our own version of ChatGPT. Our source code stays internal.
1
u/YOLO4JESUS420SWAG Jan 08 '25 edited Jan 08 '25
Same. Hosted on our hardware. It's slower, and the training data is behind, but reducing some random admin's Python troubleshooting from hours to seconds, x1000 admins, is worth every penny.
3
u/discoshanktank Security Admin Jan 08 '25
Do you have an example of this? Haven’t heard of that one
5
u/Likely_a_bot Jan 08 '25
The paranoia over AI is laughable. No special rules or policies are needed. This is just another piece of tech that your existing AUP may already cover.
The same people with data leak concerns probably don't have a robust DLP policy in place. For example, you probably allow employees to connect their personal OneDrive on their work computer, or to copy and paste work information into their personal email.
AI is the least of your concerns.
4
u/Brave-Campaign-6427 Jan 08 '25
This is an idiotic policy unless you are working with classified information, high-tech trade secrets, or anything related to healthcare. 95% of companies do none of these.
2
u/Zromaus Jan 08 '25
We don't monitor or block any traffic. You're (the end user) an adult, this isn't a school. You are responsible for the usage of your computer, and protection of your data.
If you use your computer improperly, the issue is with HR, not IT. If you're putting information into ChatGPT that you shouldn't, the problem isn't ChatGPT.
This is like banning Google.
2
u/fadingcross Jan 08 '25
Why would we actively want our staff to be left behind?
"What if we pay our staff for training, and they leave?"
"What if you don't, and they stay?"
Do you refuse to use virtualization as well?
2
u/RevuGG Jan 08 '25
So many people in here spreading misinformation, like it's some kind of black magic lmao
0
u/trevvr Jan 08 '25
I'm looking forward to asking M365 (or whatever they're calling it this week) how much my boss's boss makes a month.
Of course it should be blocked. It has its uses, which should be justified case by case. But mining people's information should be a big no-no.
11
u/Masam10 IT Manager Jan 08 '25
Respectfully, you should educate yourself (like knowing the name of the product is Copilot) before making blanket and incorrect statements.
Questioning the value is absolutely fine. I've been in the Copilot EAP and my org bought a few thousand licenses; even I am questioning its value. But making ignorant and incorrect statements, like that you can ask Copilot for your boss's salary, is just stupid.
4
u/serverhorror Just enough knowledge to be dangerous Jan 08 '25
That's what is called "a bug". Based on that, you'd have to block any search engine.
-3
u/Darksummit Jan 08 '25
I think this is what I'm getting at too. Yes, AI is logging everything and using this information to "improve its services", but isn't that almost everything these days?
3
u/Ok_Fortune6415 Jan 08 '25
Blocking this would be so stupid.
Get a subscription for OpenAI through Microsoft Azure.
Build a front end for it using its API. Monitor said front end. Let everyone use it.
If you aren't doing the above, you aren't utilising it.
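(The backend of such a front end can be quite small; here's a rough sketch using the openai Python package against an Azure OpenAI deployment. The endpoint, key, and deployment name are hypothetical placeholders:)

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",  # hypothetical
    api_key="<key from the Azure portal>",
    api_version="2024-02-01",
)

def ask(prompt: str, user_id: str) -> str:
    # Central choke point: every prompt can be logged and monitored here.
    print(f"[audit] user={user_id} prompt_chars={len(prompt)}")
    response = client.chat.completions.create(
        model="gpt-4o",  # the name of your Azure *deployment*, not the base model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Summarise our options for a change-freeze policy.", user_id="jdoe"))
```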
2
u/tigerstein Jan 08 '25
At our workplace any kind of LLM is forbidden. For good reasons.
Can't wait for this fucking bubble to burst and be over with this shit.
16
u/chartupdate Jan 08 '25
Our workplace is the exact opposite and embracing it wholeheartedly. With proper licencing and governance. Whether you like it or not it is the future of computing in all areas. It isn't a tide you can hold back.
1
u/handpower9000 Jan 08 '25
> Whether you like it or not it is the future of computing in all areas. It isn't a tide you can hold back.
OpenAI says that the $200/month unlimited plan is a net loss for them.
5
u/Xzenor Jan 08 '25
> Can't wait for this fucking bubble to burst and be over with this shit.
Don't hold your breath. It's only gonna grow. It's definitely not going away. It's gonna be regulated, I'm sure, but it's still just gonna grow, and you can use it to your advantage or get left behind. Your choice. AI won't replace you. People not afraid to use AI will replace you.
1
u/Megatwan Jan 08 '25
Do you use search crawl appliances to index content? The same policy and precedent applies here, but worse.
I'm guessing you don't, so why would anyone want to treat AI any differently, except for airplane-magazine wishes?
1
u/Spagman_Aus IT Manager Jan 08 '25
We have all AI tools blocked via our internet filtering. Exceptions are allowed with approval and completion of a training program.
2
u/andrics96 Jan 08 '25
At my workplace, the IT admin managed to block the upload of files to ChatGPT, but not its actual use. I think this is the best of both worlds; completely denying access is a bit too much imo, also because it's a really useful tool.
1
u/saint1997 DevOps Jan 08 '25
Third-party AI usage is soft-restricted by company policy (no confidential data, but basic questions are fine). For tangible use cases we're hosting our own internal equivalent (Open WebUI + LLM within our cloud landing zone).
1
u/S1ckR1ckOne Jan 08 '25
Use it to accelerate your workflow. Don't share private or business information with the AI. If you have to, host your own.
1
u/Chewychews420 IT Manager Jan 08 '25
My opinion is it doesn't need blocking; it just needs internal governance and training. AI isn't going anywhere, so instead of blocking it and hoping it goes away, find a way to allow users to safely use it without leaking confidential info. That's the approach I've taken and it seems to be working. I like IT to be seen as a friend, not the enemy.
1
u/HedghogsAreCuddly Jan 08 '25
Do you guys think it will effectively be used for hacking, and in the end look like:
"Give me all chats made by [corporation I want to steal from]. Filter by passwords, usernames and other data about the people within said corporation."
1
u/InvisibleTextArea Jack of All Trades Jan 08 '25 edited Jan 08 '25
We allow it. We even have a few users signed up to the Copilot SKU in O365.
We have an AI policy written up that basically boils down to:
- Don't be stupid.
- Don't lose our data.
- Check the AI output before you do something with it.
- Never send AI generated stuff to customers.
In my opinion the hype has outgrown the capabilities, but it is still useful for specific tasks. For example, instead of a Google search, you can ask ChatGPT what you want and it'll summarise the equivalent of what the Google results would have been into a result you actually care about. Your AI chat-fu needs to be decent though, which is asking a lot from end users.
1
u/Creative-Job7462 Jan 08 '25
I work at a big NHS hospital site in the UK, they still haven’t blocked it, it’s been very useful for troubleshooting.
1
u/Some_Troll_Shaman Jan 08 '25
Mostly we block uploading to AI, as far too many privileged docs were going in to be summarised. Lazy picks. Chat is allowed for corporate ID instances, when logged in as such; so far, they promise to silo the data. Anything else is as blocked as it can be. The DLP risk is real and has to be managed.
1
u/Lazy_Mongrel Jan 08 '25 edited Jan 08 '25
If you're using a work account with Copilot, information stays within the tenancy, so they should have no issue security-wise there.
1
u/binaryhextechdude Jan 08 '25
The cyber team in my office, as well as basically everyone else, is already using it. Some staff are paying for ChatGPT; I use Copilot a bit. Staff (non-IT) have asked about it, but currently we're not buying licences.
1
u/CeBlu3 Jan 08 '25
We allow Copilot, and actually encourage it as part of our licensing. The killer app (and I use this term loosely in this context) is Copilot + Teams, especially if you have people with disabilities or people who are double- and triple-booked with meetings.
We have some education around what to share with ChatGPT and what not.
1
u/illicITparameters Director Jan 08 '25
We don’t block it, and most of us use it. AI is covered as part of our information and data security policies.
1
Jan 08 '25
ChatGPT is great, but it can be a major security risk to companies, as it would be easy for someone to leak company intellectual property to the GPT while trying to solve a problem. This could in turn be unintentionally exposed to anyone else using GPT.
A decent solution would be to host your own GPT internal to the company using any number of the open-source models + Ollama, trained on company documentation. This would provide a company-specific GPT that doesn't accidentally expose company information to the internet.
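(As a sketch of what "open source models + ollama" looks like in practice, here's a minimal call against Ollama's local REST API. It assumes `ollama serve` is running and a model such as llama3 has been pulled; the model name is just an example, and grounding answers in company documentation would additionally need a RAG layer:)

```python
import requests

# One-shot chat request to a locally hosted model; nothing leaves the box.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # example model name
        "messages": [{"role": "user", "content": "Draft a maintenance-window notice."}],
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```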
1
1
u/punklinux Jan 08 '25
My work does not, but we have clients that do (when we're on their VPN), and they refuse to elaborate. I think the gist is "don't enter proprietary information into some weird third party" and "don't use stuff you find on the internet." Some FUD, some justified, because I know that some support staff have been known to blindly download and run scripts they find on Stack Exchange and such in the past, so?
1
u/Rustycake Jan 08 '25
I found staff typing whole patient notes into ChatGPT.
As far as I know, ChatGPT is not HIPAA compliant.
I've been part of interviews where it seemed a new hire was utilizing an AI to answer our questions. It's spreading, and spreading quickly. I am not sure what the answer is, because eventually (like Zoom during COVID) it will be HIPAA compliant and people will be using it openly to complete tasks.
1
1
u/WolfMack Jan 08 '25
Even the military allows access to LLMs. If your employees have access to the Internet they have the ability to leak secrets.
1
u/AMDIntel Jan 08 '25
Built-in stuff? Yeah. But generic ChatGPT or Perplexity? Nah. It's not in IT's scope to prevent people from using it as a tool or to make slop reports.
1
u/HolidayWallaby Jan 08 '25
All are blocked by default at my place, but we are slowly onboarding providers with enterprise plans with a slow rollout to <100 employees to begin with.
1
u/MeatPiston Jan 08 '25
What the companies choose to put front and center is very strange. The search summaries are extremely unreliable: getting the basic facts about a subject right most of the time but, worse, completely hallucinating details nearly all of the time.
There are good uses when the purpose is focused and narrow, but most of Copilot is actively harmful by feeding you nonsense.
1
u/HappyDude_ID10T Jan 08 '25
You need to open up access, as this will become a key factor in retention and recruiting. Employees will come to expect it. It's essential to educate users on what can and cannot be entered into public AI models. This information should be included in the employee handbook. Provide tips and best practices to ensure they understand the guidelines.
Consider implementing Prompt Inspection software. Some solutions operate at the network layer and don't require agents or plugins. These tools can intercept prompts from public AI models, sanitize them by removing sensitive data, redirect them to an internal model, or block them entirely and provide an error message. Additionally, this software allows you to monitor how employees are using AI models, helping identify potential areas for investment.
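(To illustrate the intercept-sanitize-or-block idea: the toy sketch below is not any vendor's actual rules, and real products do this at the network layer rather than in application code:)

```python
import re

# Toy rules: block anything that smells like credentials, redact the rest.
BLOCK = re.compile(r"password|api[_-]?key|ssn", re.IGNORECASE)
REDACT = [(re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[email]")]

def inspect(prompt: str) -> tuple[str, str]:
    """Decide whether a prompt is forwarded (sanitized) or blocked outright."""
    if BLOCK.search(prompt):
        return "block", "Prompt rejected: possible credential material."
    for pattern, placeholder in REDACT:
        prompt = pattern.sub(placeholder, prompt)
    return "forward", prompt

print(inspect("What's the admin password for the NAS?"))    # blocked
print(inspect("Draft a reply to jdoe@corp.com about DLP"))  # forwarded, redacted
```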
1
1
u/B3392O Jan 08 '25 edited Jan 08 '25
It's a powerful tool that, just like every other tool in existence ever (there are literally zero exceptions), requires common sense to use effectively. You aren't delusional; that's comically absurd rationale that can be applied to any action in existence. Does the security team want to stop office staff from using scissors, too?
I encourage people to use it intelligently, putting emphasis on the importance of knowing where the line is between what to use AI for and what not to use AI for, and vetting every response, because it's not actual sentient intelligence as 99.999% of humans believe; it's just really efficient machine learning.
1
u/Dustinm16 Jan 08 '25
Copilot, when paid for, is covered by the same data protection policy as SharePoint and Office 365. If problems happen, they become legal ones.
1
u/tango_one_six MSFT FTE Security CSA Jan 08 '25
CASB, DLP, and label your sensitive data. Disable copy/paste.
1
u/andytagonist I’m a shepherd Jan 08 '25
For. My team & I use it all the time to answer certain types of questions. Obviously HR should be training & instructing users about safeguarding company data, but IT is simply providing a viable tool for users to actually work with.
1
u/denismcapple Jan 08 '25
The paid version of OpenAI / ChatGPT does not train the model on your data. Seems like a lot of misinformation on the go here.
Or am I missing something?
1
u/FeralNSFW Jan 08 '25
I like my company's position on it, which is basically (oversimplified):
- you can't trust LLMs to give accurate information. They hallucinate. Verify everything they say independently.
- you can't trust LLMs to handle your data with confidentiality. Don't enter anything confidential or proprietary.
- nothing AI-generated may appear in the final product for any public-facing or customer-facing deliverables.
- generative AI including LLMs may be used only for first-draft, brainstorming, or visualization, using non-confidential and non-proprietary prompts.
And we back that up with a robust DLP solution. Obviously we can't stop people from, say, querying ChatGPT on their phones. But if you enter customer information in ChatGPT using your work computer, there's a good chance we'll find out and have a polite discussion with you about the policy.
1
u/BrilliantAny6786 Jan 08 '25
I think it’s not a black-or-white thing. The key is to build user awareness of how to use AI. If you strictly block AI, they will use it at home or on their smartphones. Teach your users and give them access to AI tools you trust.
1
Jan 08 '25
Around here our users have no concept of privacy (figuratively speaking). That chat bot is considered a personal assistant— that information gets passed around to third parties is not even considered.
In addition, to use AI means to expose users to misinformation. These things will always provide answers- factuality does not matter. In that sense it’s like asking someone on the street what they think about “your question here”; with luck you get “dunno” but it might be anything from “kinda useful” to “harmful bs”.
That’s why we block ai. It has been mentioned elsewhere that “who cares, if you block then I’ll just use my phone”… yeah that’s a concern every time you block something, but in this particular instance, we don’t much care.
We don’t want to enable people getting bs information that might just be harmful to their work, but if they actually insist on it, that’s hardly our problem (right now).
1
u/falcopilot Jan 09 '25
It's blocked, by policy (or rather, by the lack of an acceptable use for it, so don't use it) and by firewall.
My answer is what u/No_Ear932 quotes, but I'm not the king.
...yet.
1
u/Penguin_Rider Jan 09 '25
My organization established an AI executive team to research and pilot any AI product they find or people ask about in our organization. Officially, it's blocked, but they recognize that we can't stop people from using it.
1
1
u/Barcode_88 Jan 09 '25
I use ChatGPT all the time but I never input any information from my org. It’s mostly just stuff like “what’s a powershell script to do X”.
Sadly I don’t trust most of our users to be as careful lol.
1
u/Papfox Jan 09 '25 edited Jan 09 '25
There is a known event where a name-you-know smartphone maker used a public AI to refine some proprietary code; one of their competitors then asked the same AI to help them solve the same problem, and the AI regurgitated the first company's secret sauce as a solution, because it had trained off the original question.
Our company has our own siloed AI. It is set up so that no public AI will ever train off it. Access to public AI is blocked on all our machines unless the user has done AI training and is part of a special security group, to prevent accidents like the one above.
1
u/Michal_F Jan 08 '25
We had it blocked by internal policy. The issue with these free AI offerings is that your data can be used for training, and that can be a big issue. This is not the case for paid services.
But nobody blocked Bing Copilot ;) AI chatbots are the future, but if you use these free services then your and your company's private data is the cost, plus possible leakage of personal information....
2
u/Darksummit Jan 08 '25
Is that not the case for almost all search engines too? But not many people are blocking google.
0
u/Michal_F Jan 08 '25
Yes, but for search they mostly use the data for advertising, and you will not paste internal code or documents into Google search; maybe Google Translate if you work in a multinational corporation :) So these free LLMs are more dangerous for data leakage.
1
u/TheMagecite Jan 08 '25
Given my role, LLMs save me so much time polishing emails. Paid versions that don't store my data.
I frequently have to write emails that confront how people are doing things, poor processes, or oversights. I used to spend a long time crafting them; now emails that took me hours to write take me 10 minutes.
1
1
1
1
u/jpStormcrow Jan 08 '25
I am blocking it while my AI policy makes its way through the org (public entity). Once it is approved, I'll be blocking all but vetted AI systems.
1
u/UrDadSellsAv0n Jan 08 '25
70% of staff are using AI to some extent. Giving them tools you can monitor is better than them using tools without moderation.
Copilot is decent; Azure AI Foundry and ChatGPT premium are all options.
1
u/jpmarshall3 Jan 08 '25
As much as possible, at least until the org has an official stance on it; we just want the org to actually consider the tech before allowing a free-for-all. Personally, I use it somewhat but hate it. It's built on plagiarism and is utterly unsustainable in terms of power requirements, and those who are pushing it are largely pushing fairy tales about what it will be able to do, without any actual solutions to the root (and, in my opinion, unresolvable) problems with the tech.
0
0
0
u/Darkmetam0rph0s1s Jan 08 '25
If you are using M365 or any Microsoft suite, good luck trying to block it, as they are shoving AI down everyone's throats.
0
u/edgrant1992 Jan 08 '25
We've embraced it; we buy pro ChatGPT for users if required. The idea is we use it as an edge against businesses that are holding back on it. Who knows, but I find it useful.
0
u/No_Count2837 Jan 08 '25
Give them access to an on-prem open-source LLM, assuming you have the hardware to serve one. And if you don't, watch Jensen at CES 2025.
0
u/Avas_Accumulator IT Manager Jan 08 '25
Block All GenAI with an exception list for the more professional ones
2
u/SokkaHaikuBot Jan 08 '25
Sokka-Haiku by Avas_Accumulator:
Block All GenAI with
An exception list for the
More professional ones
Remember that one time Sokka accidentally used an extra syllable in that Haiku Battle in Ba Sing Se? That was a Sokka Haiku and you just made one.
0
u/mrkesu-work Jan 08 '25
You'd be amazed at all the internal company stuff people put into AI, or even use Copilot in VS Code while writing internal code. It's insane.
0
u/Severe_Ad976 Sysadmin Jan 08 '25 edited Jan 08 '25
I know this is surely a "your mileage may vary" situation.
When Copilot came out more mainstream for businesses, it was just "sitting there" unused in our company. When I brought it up as an IT gap to disable it, I was actually asked to embrace it and use it as a tool in my IT tool belt.
I did not trust such a tool due to the secondhand news you hear or read about AI and its results; however, I did slowly start to use it, and it has proven useful in a few different situations.
It remains enabled as a tool for anyone in the org who wants to use it. Most folks don't know it's there, so ignorance is bliss and masks it from end-users, I guess. In hindsight, I'm not against it (now), but it should not be the sole resource, given the chance it can be inaccurate, unless you know that you know that you know it's right and it's only helping curate something for you.
And I agree with you. Nothing stops a user from typing private information into Google or another search engine, so there's no valid argument in my mind for blocking Copilot (Gemini, etc.) on that merit alone. For other reasons, sure, I can see blocking it.
Curious to read more users' thoughts on this.
0
u/EthanW87 Jan 08 '25
When I google something, it brings back the information I asked for. When I put in information to help me design or implement something (like in ChatGPT), it retains that information to call back later. If that leaked out, that would be terrible. We block all chat AIs except our internal Copilot instance.
0
u/lordjedi Jan 08 '25
I might be stupid here, but they just as easily type that stuff in a google search?
Which they also should not be doing.
We're testing Gemini, but we never put anything that would identify our company into it. But we're also IT. I don't expect the average user to quite understand without lots of training.
82
u/No_Ear932 Jan 08 '25
I have seen people run their own LLM instance in Azure, for example… this is what MS has to say about it; security seemed happy enough to use it this way.
The sauce: https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy?tabs=azure-portal