As far as Skynet goes, it's a bit of a mixed bag. Most AI researchers will tell you that Skynet isn't really a concern, because AI isn't malicious; it isn't actual intelligence in the first place. But there is a caveat. Like any computer program, it responds to its inputs and parameters. If those parameters (the objectives and constraints we hand it) aren't set properly, the outcome is unpredictable, and it could be bad.
I was listening to a lecture on AI a few months back where the professor gave a good example:
Say you have a robot nanny that takes care of the kids while the parents are at work. One day both parents are stuck at work all night. The children are hungry and there's no food in the fridge. All the robot knows is that children need food. The robot spots the pet cat and realises that it would be a fine source of protein. Nobody told it that pets aren't food, so nothing in its parameters forbids the idea; it satisfied the goal it was given while violating a constraint everyone assumed but never stated.
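You can boil the professor's example down to a few lines of code. This is a hypothetical toy, not how any real robot is programmed: an agent that just picks whichever action scores highest on a single objective ("feed the children"), with the actions and their effects invented for illustration. The failure isn't malice; it's that the unstated constraint never made it into the scoring function.

```python
# Toy sketch (hypothetical): an agent that maximizes one objective
# with no other constraints, like the robot nanny in the example.

def choose_action(actions, objective):
    """Pick the action that scores highest under the given objective."""
    return max(actions, key=objective)

# Candidate actions and their effects (made up for the example).
actions = [
    {"name": "wait for parents", "feeds_children": 0.0, "harms_pet": 0.0},
    {"name": "order groceries",  "feeds_children": 0.8, "harms_pet": 0.0},
    {"name": "cook the cat",     "feeds_children": 1.0, "harms_pet": 1.0},
]

# The objective as stated: only "children need food" counts.
naive = lambda a: a["feeds_children"]
print(choose_action(actions, naive)["name"])  # chooses "cook the cat"

# The same agent with the unstated constraint made explicit.
safe = lambda a: a["feeds_children"] - 10 * a["harms_pet"]
print(choose_action(actions, safe)["name"])   # chooses "order groceries"
```

Same agent, same actions; the only difference is whether the constraint was written down. That's the whole point about parameters.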
Simply put, Skynet will happen only if we give AI free rein and tell it to protect the planet without also telling it to protect humans. And we're never going to do that.