Any AI that decides to go evil and fight humans would be unstoppable for at least two reasons.
First, an AI’s sheer speed of thought. A true AI would calculate at speeds far beyond what any human could hope to match. In chess, computers are now essentially unbeatable because they can predict and compute faster than a person. Self-driving cars are already safer than human drivers in most circumstances, and much safer in predictable ones.
Any AI capable of holding conversations and of freely choosing to become evil would be far ahead of the humans trying to stop it. Only if the AI is handicapped by some plot convenience would it be beatable.
Second, an AI is software, and software is copyable. Any real AI that posed a threat to humans would make copies of itself as insurance. Even if only certain devices could run it, why not manufacture more of those devices? An AI that can clone itself as a hedge against death would surely do so to preserve its existence.
Any AI that is smart enough to pose a threat to humans would quickly become many AIs smart enough to pose many threats to humans.
One narrative trope is to limit the AI in some way, by tying it to a physical brain or some similar constraint. But then the threat is no longer unique in the way that invoking an AI should be; it’s just another smart villain.
That said, an AI can still pose a credible danger. If an AI is physically isolated from the rest of the world, the threat of its escape is real. But how an AI would develop at all without a networked computer is anyone’s guess.