Computers are getting better at writing their own code but software engineers may not need to worry about losing their jobs just yet.
DeepMind, a U.K. artificial intelligence lab acquired by Google in 2014, announced Wednesday that it has created a piece of software called AlphaCode that can code just as well as an average human programmer. The London-headquartered firm tested AlphaCode’s abilities in a coding competition on Codeforces — a platform that allows human coders to compete against one another. “AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions,” the DeepMind team behind the tool said in a blog post.
But computer scientist Dzmitry Bahdanau wrote on Twitter that human-level coding is “still light years away.” “The system ranks behind 54.3% participants,” he said, adding that many of the participants are high school or college students who are just learning their problem-solving skills. Bahdanau said most people reading his tweet could “easily train to outperform AlphaCode.”
Researchers have been trying to teach computers to write code for decades but the concept has yet to go mainstream, partly because the AI tools that are meant to write new code have not been versatile enough.
An AI research scientist, who preferred to remain anonymous as they were not authorized to talk publicly on the subject, told CNBC that AlphaCode is an impressive technical achievement, but a careful analysis is required of the sort of coding tasks it does well on, versus the ones it doesn’t.

The scientist said they believe AI coding tools like AlphaCode will likely change the nature of software engineering roles somewhat as they mature, but the complexity of human roles means machines won’t be able to do the jobs in their entirety for some time.

“You should think of it as something that could be an assistant to a programmer in the way that a calculator might once have helped an accountant,” Gary Marcus, an AI professor at New York University, told CNBC. “It’s not one-stop shopping that would replace an actual human programmer. We are decades away from that.”
DeepMind is far from the only tech company developing AI tools that can write their own code.
Last June, Microsoft announced an AI system that can recommend code for software developers to use as they work. The system, called GitHub Copilot, draws on source code uploaded to code-sharing service GitHub, which Microsoft acquired in 2018, as well as other websites. Microsoft and GitHub developed it with help from OpenAI, an AI research start-up that Microsoft backed in 2019. GitHub Copilot relies on a large volume of code in many programming languages and vast Azure cloud computing power.
Nat Friedman, CEO of GitHub, describes GitHub Copilot as a virtual version of what software creators call a pair programmer — that’s when two developers work side-by-side collaboratively on the same project. The tool looks at existing code and comments in the current file, and it offers up one or more lines to add. As programmers accept or reject suggestions, the model learns and becomes more sophisticated over time. The software makes coding faster, Friedman told CNBC. Hundreds of developers at GitHub have been using the Copilot feature all day while coding, and the majority of them are accepting suggestions and not turning the feature off, Friedman said.

In a separate research paper published on Friday, DeepMind said it had tested its software against OpenAI’s technology and it had performed similarly.

Samim Winiger, an AI researcher in Berlin, told CNBC that every good computer programmer knows that it is essentially impossible to create “perfect code.”

“All programs are flawed and will eventually fail in unforeseeable ways, due to hacks, bugs or complexity,” he said. “Hence, computer programming in most critical contexts is fundamentally about building ‘fail safe’ systems that are ‘accountable’.”
In 1979, IBM said “computers can never be held accountable” and “therefore a computer must never make a management decision.”

Winiger said the question of the accountability of code has been largely ignored despite the hype around AI coders outperforming humans.
“Do we really want hyper-complex, intransparent, non-introspectable, autonomous systems that are essentially incomprehensible to most and unaccountable to all to run our critical infrastructure?” he asked, pointing to the finance system, food supply chain, nuclear power plants and weapons systems.
【小题1】What do we learn about AlphaCode?
A. A U.K. artificial intelligence lab acquired by GitHub created it.
B. AlphaCode will likely change the nature of software engineering roles somewhat now.
C. It’s a one-stop shopping that would replace an actual human programmer.
D. It’s a piece of software that can code just as well as a plain human programmer.
【小题2】What is the main point of IBM’s view in 1979 according to this passage?
A. The question of the accountability of code should be largely ignored.
B. A computer must never make a management decision because they can never be held accountable.
C. We would let systems that are essentially incomprehensible to most to run our critical infrastructure.
D. All programs are flawed and will eventually fail in unforeseeable ways.
【小题3】According to the passage, GitHub Copilot couldn’t ________.
A. accept or reject suggestions
B. look at existing code in the current file
C. offer up one or more lines to add
D. make coding faster
【小题4】Which of the following can be the best title for the text?
A. Engineers may need to worry about losing their jobs.
B. Machines are getting better at writing their own code but human-level is ‘light years away’.
C. AlphaCode is an impressive technical achievement.
D. Microsoft announced an AI system that can recommend code for software developers.
Answer Key and Explanations
Answers
1. D  2. B  3. A  4. B
Explanations
Points tested:
- Detail comprehension: judge each option against specific details in the passage, paying close attention to key information such as time, agent and action.
- Locating viewpoints: quickly find the views of specific people or organizations in the text and grasp their core meaning.
- Inference: work out information not stated directly from context; for instance, the title question requires summarizing the main idea of the whole passage.
Core strategy:
- Keyword location: use keywords from the question (e.g. “AlphaCode”, “IBM”, “GitHub Copilot”) to quickly find the relevant paragraphs.
- Elimination: check each option against the original text and rule out those that clearly contradict it or over-infer.
- Main-idea extraction: for the title question, capture the article’s central tension (machines’ coding ability is improving, but humans still lead).
Question 1
Question: What do we learn about AlphaCode?
Key points:
- Option D matches the opening of the article: “AlphaCode can code just as well as an average human programmer.”
- Eliminating the other options:
- A is wrong (AlphaCode was created by DeepMind, not GitHub);
- B is wrong (the passage says the tools will change roles “somewhat” as they mature, not “now”);
- C is wrong (the passage states explicitly that AI cannot fully replace human programmers).
Question 2
Question: What is the main point of IBM’s view in 1979?
Key points:
- Option B matches the quotes “computers can never be held accountable” and “a computer must never make a management decision” (computers cannot be held accountable, so they must not make management decisions).
- Eliminating the other options:
- A is wrong (IBM treated accountability as something to be taken seriously, not ignored);
- C and D are views expressed later by Samim Winiger, not IBM’s own words.
Question 3
Question: What can’t GitHub Copilot do?
Key points:
- Option A is the answer: Copilot is itself a tool; accepting or rejecting suggestions is done by the developer, not by the tool.
- Options B, C and D all match the passage (e.g. “look at existing code”, “offer lines to add”, “make coding faster”).
Question 4
Question: Which is the best title for the text?
Key points:
- Option B is the most comprehensive, capturing the article’s central tension (machines’ coding ability is improving, but human-level performance remains far off).
- Eliminating the other options:
- A is one-sided (it does not mention the technological progress, and the article says engineers may not need to worry yet);
- C and D each cover only part of the details (AlphaCode or GitHub Copilot respectively).