

People have different levels of “nerves” than others, and it kind of sounds like you may be filtering out applicants on an arbitrary metric (how nervous a person gets in an interview). I don’t have enough information about your process to say for sure (obviously), but it may be something to think about. Interviews can be very high-stakes for some people (“I may become homeless”) and not for others (“my parents are rich”). Once hired, the work isn’t necessarily as high-stakes, and toy problems aren’t what SEs work on day-to-day.
I’ve tried Copilot for a while and played around with Cursor for a bit. I was better and faster without Copilot because I sometimes didn’t pay enough attention to the lines it would generate, which caused subtle bugs that took a long time to debug. Cursor just produced unmaintainable codebases that I had no knowledge of; to make major changes, it would be faster for me to rewrite them from scratch. The act of typing gives me time to think more about what I’m doing or am about to do, while Copilot’s generations are distracting and break my thought process. I work best with good LSP tooling and, occasionally, AI chatbots that don’t directly modify my code (mostly just for customized example snippets for libraries or frameworks I’m unfamiliar with, though that has its own problems because the LLM’s knowledge is often out of date).