If you scroll down to "Allow GitHub to use my data for AI model training" in GitHub settings, you can enable or disable it. However, what really gets me is how they pitch it like it’s some kind of user-facing feature:
Enabled = You will have access to the feature
Disabled = You won't have access to the feature
As if handing over your data for free is a perk. Kinda hilarious.
It’s not so bad: there’s no double negative, and it’s not one of those confusing “switch” controls where it’s always ambiguous whether the thing is enabled or not.
In contrast, when you create a GCS bucket, the modal uses a checkbox to enable “public access prevention”. Who designed that? It takes me a solid minute to figure out whether I’m publishing private data or not.
the framing is so manipulative. "you will have access to the feature" — what feature? the feature of giving away my data? at least be honest and call it what it is. i turned it off immediately but i wonder how many people just leave it because the wording makes it sound like you lose something.
I went to check on this: I have everything Copilot-related disabled, yet in the two bars that measure usage, my Copilot Chat usage was somehow at 2%. How is this possible?
Before anyone comes to sell me on AI: this is my personal account. I have and use Copilot on my business account (a completely different user account); I just make it a point not to use it in my personal time, so I can keep my skills sharp.
I wonder if that’s it! I occasionally do a code search on GitHub, then remember it doesn’t work well and go back to searching in the IDE. I usually need to search a branch other than main, because a lot of my projects have a develop branch where things actually happen. That would explain it, so I guess this is it.
If you're talking about the quota bar: that only measures your premium request usage (models with a #.#x multiplier next to the name). If you only use the free models and code completion, you won't actually consume any "usage". AI code review consumes a single request (now). Same with the GitHub Copilot web chat: if you use a free model, it doesn't count; if you use a premium model, you get charged the usage cost.
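As a back-of-the-envelope sketch of that accounting (my reading of the comment above, not GitHub's documented billing formula, and with made-up multipliers):

```python
# Rough sketch of premium-request accounting as described above.
# The multipliers here are illustrative, not GitHub's actual pricing table.
def premium_requests(calls):
    """calls: list of (multiplier, count) pairs. 0x models consume nothing."""
    return sum(multiplier * count for multiplier, count in calls)

# 200 completions on a 0x model cost nothing; 4 chats on a 1x model
# and 2 runs on a 3x model consume 4 + 6 = 10 premium requests.
usage = premium_requests([(0, 200), (1, 4), (3, 2)])
print(usage)  # 10
```

So the 2% the parent saw would mean some interaction was billed against a model with a nonzero multiplier, even with completions and free-model chat excluded.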
Previously, big tech would still somehow find loopholes around the GPL, but licenses still had some value.
Nowadays it genuinely feels like a lot less, because there are now services that will rewrite the code to circumvent the license.
I used to think that licenses like the SSPL might be interesting approaches, but I feel like they aren't much less prone to this either anymore.
I guess the freedom to study and use may also include training AI, but it would be cool if all the derivative works (the AI models, and the code generated by those models) had to be licensed as GPL. Lawyers needed here.
I guess the "perk" is that maybe their models get retrained on your data making them slightly more useful to you (and everyone else) in the future? idk
No, it’s not. Please think like a developer and not like someone playing amateur gotcha journalist on social media. Feature flags are (ab)used in this way all the time. What is a feature? What is a feature flag? It’s like asking what authorisation is versus all your other business rules. There’s a grey area.
> On April 24 we'll start using GitHub Copilot interaction data for AI model training unless you opt out. Review this update and manage your preferences in your GitHub account settings.
Now
"Allow GitHub to use my data for AI model training" is enabled by default.
I always thought "opt-in" (not "opt in") meant something you have to actively choose to enable; otherwise, it stays off. So calling something "opt-in by default" sounds like a misnomer to me.
But English is not my first language so please correct me if I'm wrong.
> Why are you only using data from individuals while excluding businesses and enterprises?
> Our agreements with Business and Enterprise customers prohibit using their Copilot interaction data for model training, and we honor those commitments. Individual users on Free, Pro, and Pro+ plans have control over their data and can opt out at any time.
Ah, so when the inevitable "bug" appears, and we all learn that you've completely failed to honor anything, what will be your "commitment" then? An apology and a few free months?
Time to start pushing for a self hosted git service again.
Yes - not impressed at all that this is opt-in default for business users. We have a policy in place with clients that code we write for them won’t be used in AI training - so expecting us to opt out isn’t an acceptable approach for a business relationship where the expectation is security and privacy.
It is not opt-in by default for business users. The setting doesn't show up in org policies, and GitHub states that it isn't scoped to business users.
Gah - you’re right. But given that I don’t use personal Copilot, yet I do manage an organisation that provides Copilot to some of our developers, and I was sent an email this evening making no mention at all of business Copilot being excluded, it could definitely have been communicated better…
> Again, your organization's Copilot interaction data is not included in model training under this new policy, but we are excited for you to enjoy the product improvements it will unlock.
Reading the github blog post "If you previously opted out of the setting allowing GitHub to collect this data for product improvements, your preference has been retained—your choice is preserved, and your data will not be used for training unless you opt in."
We are not. The reason we wanted to announce early was so that folks had plenty of time to opt-out now. We've also added the opt-out setting even if you don't use Copilot so that you can opt-out now before you forget and then if you decide to use Copilot in the future it will remember your preference.
Would you be able to comment on https://news.ycombinator.com/item?id=47522876, i.e. explain the legal basis for this change for EU based users? If there is none, you may have to expect that people will exercise their right to lodge a complaint with a supervisory authority.
What did everyone expect? I can't understand this community's trust of microsoft or startups. It's the typical land grab: start off decent, win people over, build a moat, then start shaking everybody down in the most egregious way possible.
It's just unusual how quickly they're going for the shakedown this time.
If they turned it on for business orgs, that would blow up fast. The line between "helpful telemetry" and "silent corporate data mining" gets blurry once your team's repo is feeding the next Copilot.
People are weirdly willing to shrug when it's some solo coder getting fleeced instead of a company with lawyers and procurement people in the room. If an account tier is doing all the moral cleanup, the policy is bad.
The individual/corporate asymmetry you're describing is standard across B2B SaaS. Slack, Notion, and Figma all include ML training carve-outs in enterprise DPAs that free users don't get. GitHub isn't doing anything unusual here — they're just doing it with code, which feels more sensitive than documents or messages because it might literally be your employer's IP you're working on from a personal account.
The interesting nuance is the enforcement mechanism. martinwoodward clarified below that exclusion happens at the user level, not the repo level: if you're a member of a paid org, your interaction data is excluded even on a free personal Copilot account. That's actually more protective than I expected — it handles the contractor case where someone works across multiple repos of varying org types.
The remaining ambiguity is temporal: if someone leaves an org, do their historical interactions get retroactively included? Policy answers to that question are hard to verify and even harder to audit.
Separate fun fact: Gemini CLI blocks env vars with strings like 'AUTH' in the name. They have two separate configuration options that both let you allow specific env vars. Neither works (bad vibe coding). I tried opening an issue and a PR, and two separate vibe-coding bots picked up my issue and wrote PRs, but nobody has looked at them. The bug's still there, so I can't do git commit signing via the ssh agent socket. The only choice is the less-secure, unsigned git commits.
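My guess at the shape of the filter involved (a hypothetical sketch, not Gemini CLI's actual code or blocklist), and why it catches SSH_AUTH_SOCK even though that variable only holds a socket path:

```python
# Hypothetical sketch of substring-based env var filtering. The substring
# list and the allowlist mechanism are assumptions for illustration.
SENSITIVE_SUBSTRINGS = ("KEY", "TOKEN", "SECRET", "AUTH")

def filter_env(env, allowlist=frozenset()):
    """Drop any variable whose name contains a sensitive substring,
    unless it is explicitly allowlisted (the part that reportedly
    doesn't work in the real CLI)."""
    return {
        name: value
        for name, value in env.items()
        if name in allowlist
        or not any(s in name.upper() for s in SENSITIVE_SUBSTRINGS)
    }

env = {"PATH": "/usr/bin", "SSH_AUTH_SOCK": "/tmp/agent.sock", "API_TOKEN": "x"}
print(filter_env(env))                      # SSH_AUTH_SOCK is gone: "AUTH" matched
print(filter_env(env, {"SSH_AUTH_SOCK"}))   # a working allowlist would restore it
```

Substring matching is why a harmless socket path gets swept up with actual credentials; a working allowlist would be the escape hatch, which is exactly the part that's broken.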
On top of that, Gemini 3 refuses to refactor open source code, even if you fork it, if Gemini thinks your changes would violate the intent of the original developers in a safety/security context. Even if you believe you're actually making it more secure, if Gemini disagrees, it won't write your code.
OpenCode has a plugin that lets you add an .ignore file (though I think .agentignore would be a better name). The problem is that, even though the plugin stops the agent from directly reading the file, there's no guarantee the agent won't try to be helpful and reason: "well, I can't read .envrc using my read tool, so let me cat .envrc and read it that way".
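A minimal sketch of why a read-tool-level ignore list is porous (hypothetical behavior, not OpenCode's actual plugin code): the check guards exactly one tool, while a shell tool sees the same filesystem with no check at all.

```python
import fnmatch
import subprocess

# Hypothetical ignore patterns enforced only inside the "read" tool.
IGNORED = ["*.env", ".envrc", "secrets/*"]

def read_tool(path):
    """The guarded path: refuses to read anything matching IGNORED."""
    if any(fnmatch.fnmatch(path, pattern) for pattern in IGNORED):
        raise PermissionError(f"{path} is in the ignore list")
    with open(path) as f:
        return f.read()

def shell_tool(command):
    """The unguarded path: `cat .envrc` never consults IGNORED."""
    return subprocess.run(command, shell=True,
                          capture_output=True, text=True).stdout
```

Closing the gap would mean enforcing the ignore list in every tool that can touch the filesystem (or sandboxing the shell), not just in the read tool.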
Thanks to Github and the AI apocalypse, all my software is now stored on a private git repository on my server.
Why would I even spend time choosing a copyleft license if any bot will use my code as training data for commercial applications? I'm not planning on creating any more open source code, and what projects of mine still have users will be left on GH for posterity.
If you're still serious about open source, time to move to Codeberg.
Made the same choice, my open source projects with users are in maintenance mode or archived. New projects are released via SaaS, compiled artifacts or not at all.
I scratch my open source itch by contributing to existing language and OS projects where incremental change means eventually having to retrain models to get accurate inference :)
What is the legal basis of this in the EU? Ignoring the fact they could end up stealing IP, it seems like the collected information could easily contain PII, and consent would have to be
> freely given, specific, informed and unambiguous. In order to obtain freely given consent, it must be given on a voluntary basis.
It easily breaks the GDPR: consent has to be opt-in, with no workarounds like pre-filling the box before you hit submit.
Some will say this applies only to personal data, and yes, it does. But it takes only one line of code for me to use my own phone number as test data while locally testing a registration form in the application I'm developing.
Once that gets sent to Copilot, I can threaten legal action if they don't take it down.
They didn't even link the setting in their email. They didn't even name it specifically, just vaguely gestured toward it. Dark patterns, but that's Microslop for ya
They do not make it very simple to opt out. That is false.
On Android, for instance, I invite you to use the GitHub app and try to modify your opt-in/opt-out settings... You will find that nothing works on the settings page, once you actually find the settings page after digging through a couple of layers and scrolling about 2 ft.
I appreciated the notification at the top of the screen because it prompted me to disable every single copilot feature I possibly could from my account. I also appreciated Microsoft for making Windows 11 horrible so I could fall back in love with Linux again.
Who in their right mind will opt into sharing their code for training? Absolutely nobody. This is just a dark pattern.
Btw, even if disabled, I have zero confidence they are not already training on our data.
I would also recommend sprinkling copyright notices all over the place and changing the license of every file, just in case they run some sanity checks before your data gets consumed. Just to be sure.
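If you want to do that mechanically, here's a throwaway sketch (run it on a copy first; the header text and `.py` glob are just placeholders for whatever you'd actually use):

```python
from pathlib import Path

# Placeholder header; substitute your own notice.
HEADER = "# Copyright (c) the author. Not licensed for AI training.\n"

def stamp(root, glob="**/*.py"):
    """Prepend HEADER to every matching file that doesn't already start with it."""
    for path in Path(root).glob(glob):
        text = path.read_text()
        if not text.startswith(HEADER):
            path.write_text(HEADER + text)

# Example: stamp(".")  # stamps every .py file under the current directory
```

The startswith check makes it safe to re-run, so you can wire it into a pre-commit hook without doubling up headers.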
Serious question: let's say I host my code on this platform which is proprietary and is for my various clients. Who can guarantee me that AI won't replicate it to competitors who decide to create something similar to my product?
If the code is ever visible to anyone else ever, you have no guarantee. If it’s actually valuable, you have to protect it the same way you’d protect a pile of gold bars.
What does “my code...for my clients” mean (is it yours or theirs)? If it’s theirs let them house it and delegate access to you. If they want to risk it being, ahem...borrowed, that’s their business decision to make.
If it’s yours, you can host it yourself and maintain privacy, but the long tail risk of maintaining it is not as trivial as it seems on the surface. You need to have backups, encrypted, at different locations, geographically distant, so either you need physical security, or you’re using the cloud and need monitoring and alerting, and then need something to monitor the monitor.
It’s like life. Freedom means freedom from tyranny, not freedom from obligation. Choosing a community or living solo in the wilderness both come with different obligations. You can pay taxes (and hope you’re not getting screwed, too much), or you can fight off bears yourself, etc.
> If you have been granted a free access to Copilot as a verified student, teacher, or maintainer of a popular open source project, you won’t be able to cancel your plan.
It’s not clear to me how GitHub would enforce the “we don’t use enterprise repos” stuff alongside “we will use free tier copilot for training”.
A user can be a contributor to a private repository, but not have that repository owner organisation’s license to use copilot. They can still use their personal free tier copilot on that repository.
How can enterprises be confident that their IP isn’t being absorbed into the GH models in that scenario?
Quite simply, that's just a matter of the corporate internal policy and its (lack of) enforcement. This problem is just a subset of the wider IP breach with some people happily feeding their work documents into the free tier of ChatGPT.
We do not train on the contents from any paid organization’s repos, regardless of whether a user is working in that repo with a Copilot Free, Pro, or Pro+ subscription. If a user’s GitHub account is a member of or outside collaborator with a paid organization, we exclude their interaction data from model training.
For private repositories under a personal account, if the repo owner has opted out of model training but a collaborator has not, would the collaborator's Copilot interactions with that repo still be used for training?
On my Android phone I was able to change the setting using Firefox, by logging into GitHub and not allowing it to launch the GitHub app.
I was unable to change the setting when the GitHub app opened the web page in its embedded view... button clicks weren't working. Quite frustrating.
They've had ample access to the final output - our code, but they still hope with enough data on HOW we work they can close the agentic gap and finally get those stinky, lazy humans that demand salary out of the loop.
I just checked my Github settings, and found that sharing my data was "enabled".
This setting does not represent my wishes and I definitely would not have set it that way on purpose. It was either defaulted that way, or when the option was presented to me I configured it the opposite of how I intended.
Fortunately, none of the work I do these days with Copilot enabled is sensitive (if it was I would have been much more paranoid).
I'm in the USA and pay for Copilot as an individual.
Shit like this is why I pay for duck.ai where the main selling point is that the product is private by default.
They use data from the poor student tier, but arguably, large corporates and businesses hiring talented devs are going to create higher quality training data. Just looking at it logically, not that I like any of this...
I have GitHub Copilot Pro. I don't believe I signed up for it. I neither use it nor want it.
1. A lot of settings are 'Enabled' with no option to opt out. What can I do?
2. How do I opt out of data collection? I see the message informing me to opt out, but 'Allow GitHub to use my data for AI model training' is already disabled for my account.
Hey David - if you want to send me (martinwoodward at github.com) details of your GitHub account I can take a look. At a guess I suspect you are one of the many folks who qualified for GitHub Copilot Pro for free as a maintainer of a popular open source project.
Sounds like you are already opted out because you'd previously opted out of the setting allowing GitHub to collect this data for product improvements. But I can check that.
Note, it's only _usage_ data when using Copilot that is being trained on. Therefore if you are not using Copilot there is no usage data. We do not train on private data at rest in your repos etc.
So, how does this work with source-available code, that’s still licensed as proprietary - or released under a license which requires attribution?
If someone takes that code and pokes around on it with a free tier copilot account, GitHub will just absorb it into their model - even if it’s explicitly against that code’s license to do so?
Finally. The option for me to enable Copilot data sharing has been locked as disabled for some time, so until now I couldn't even enable it if I wanted to.
That's me. Frankly, I'm looking at just uninstalling VSCode, because Copilot straight-up gets in the way of so much, and they've stopped even bothering with features not related to it (with one exception, the native browser in v112, which, admittedly, is great).
If you previously opted out of the setting allowing GitHub to collect data for product improvements, your preference has been retained here. We figured if you didn't want that, then you definitely wouldn't want this.
> Content from your issues, discussions, or private repositories at rest. We use the phrase “at rest” deliberately because Copilot does process code from private repositories when you are actively using Copilot. This interaction data is required to run the service and could be used for model training unless you opt out.
Sounds like it's even likely to train on content from private repositories. This feels like a bit of an overstep to me.
Does it even matter? They trained AI on obviously copyrighted and even pirated content. If this change is legally significant and constitutes a breach, then the existence of all models and all AI businesses is illegal too.
It might or might not be legal, but it seems materially worse to screw over your direct customers than to violate the social-contracty nature of copyright law. But hey ho if you're not paying then you're the product, as ever was.
If this doesn't sound bad enough, it's possible that Copilot is already enabled. As we know, these kinds of features are pushed to users instead of being asked for.
Maybe it's already active in our accounts without us realizing it, so our code will be used to train the AI.
We can't be sure whether this will happen or not, but a company like GitHub should stay miles away from this kind of policy. I personally wouldn't use GitHub for private corporate repositories; only as a public web interface for public repos.
So I do all the work of thinking about how to do something, and as soon as I tell Copilot about it, now it's in the training data and anyone can ask the LLM and it'll tell them the solution I came up with? Great. I'm going to cancel.
> From April 24 onward, interaction data—specifically inputs, outputs, code snippets, and associated context—from Copilot Free, Pro, and Pro+ users will be used to train and improve our AI models unless they opt out.
Now is the time to run off of GitHub and consider Codeberg or self hosting like I said before. [0]
Codeberg doesn't support non-OSS code, and I'd rather just have one 'git' thing I have to know for both OSS and private work, so it's not a great option, IMO. Self-hosting is out for other reasons, too.
I'm not sure there are any good GitHub alternatives. I don't trust Gitlab either. Their landing page title currently starts with "Finally, AI". Eek.
It's an option but I can't really take the platform seriously when the owner removes content based on his personal whims. He currently removes crypto projects because of their 'social ills'. I don't work on crypto, but he might start deleting AI projects for the same reason, say.
Please don’t strawman me, I asked completely different question.
It’s not about being grateful or something. It’s that many people (devs) are too concerned about their code being stolen, as if they’ve come up with something unique and the LLMs were some kind of database (which they aren’t).
At the end of the day we’re all going to be using AI to write code; many of us are already doing that. And if some GitHub Copilot model gets better, we get higher-quality code that is generally available for the next pretraining runs (for your models and others’). Some would even switch to Copilot if it’s good.