
#attacker #brrrt #jet #strike #thunderbolt #groundpounder #qa10 #a10thunderboltii #aircraft #airplane #tusks #warthog #a10warthog #militaryjet #unmanneddrone
Published: 2015-06-04 06:54:19 +0000 UTC; Views: 3773; Favourites: 79; Downloads: 39
Description
The unmanned version of the A-10. Without the risk of pilot injury, this A-10 can basically keep firing at the enemy's armored battalions until it either runs out of ammunition or is shot down. Its AI can work in tandem with four other Warthogs, and it can also be controlled by a single manned A-10, effectively forming a squadron of A-10s.
Shading practice again.
Comments: 14
Anzac-A1 [2015-06-04 07:31:00 +0000 UTC]
You've basically done the A-10 drone from COD: Ghosts. Besides, the A-10 already does what you say with a pilot, and the pilot's protected by very thick cockpit armour. So a drone version is just unnecessarily expensive.
👍: 0 ⏩: 1
slowusaurus In reply to Anzac-A1 [2015-06-04 07:50:57 +0000 UTC]
Expensive in the real world, yes. In my AU? No, since in the AU the demand for AI-controlled UCAVs is on the rise.
Plus, I never saw COD: Ghosts so hell, I don't care.
👍: 0 ⏩: 2
LightspeedToVictory In reply to slowusaurus [2015-06-04 14:54:56 +0000 UTC]
Since when is the Hog expensive? I don't believe the Air Force on that matter. Furthermore, if you look at the flying cost per hour, it's actually one of the lowest out of all our tactical jets.
👍: 0 ⏩: 1
slowusaurus In reply to LightspeedToVictory [2015-06-04 15:11:30 +0000 UTC]
Read the original comment by Anzac first.
👍: 0 ⏩: 0
Anzac-A1 In reply to slowusaurus [2015-06-04 08:07:02 +0000 UTC]
Still, if I was a soldier on the ground, I wouldn't trust my life to a drone. Drones are susceptible to hacking and many other computer-related problems that humans are immune to. Also, a drone controlled by an AI is incapable of making illogical decisions that may in fact turn out to be what is required to achieve success.
👍: 0 ⏩: 1
slowusaurus In reply to Anzac-A1 [2015-06-04 08:18:20 +0000 UTC]
Don't worry, the AIs in the planes still make decisions with the help of their pilots or partners. AIs are considered standard issue on every manned plane in the AU, though several older generations of planes are not fitted with them yet.
In the AU, only two AIs are capable of flying an aircraft by themselves. One went rogue and sought ultimate freedom; one stayed and helped the protagonists stop the rogue.
It's an Ace Combat-ish story, and the AU takes place in a world where some dinosaurs never went extinct (hence the hog mouth resembling a ceratopsian), so cold logic is mostly thrown out here.
👍: 0 ⏩: 1
Anzac-A1 In reply to slowusaurus [2015-06-04 09:41:36 +0000 UTC]
Oh, I see. So the AIs are really just there to ease the workload. That's probably the best way to do it; have a human pilot fly the plane, with the AI there to provide backup (watch his six, etc.).
👍: 0 ⏩: 2
prokhorvlg In reply to Anzac-A1 [2015-06-21 23:19:38 +0000 UTC]
To be fair, one day, AIs might be better than humans at flying planes, or doing anything for that matter. But once that day comes A10s will be very old news.
👍: 0 ⏩: 1
Anzac-A1 In reply to prokhorvlg [2015-06-22 00:27:44 +0000 UTC]
Personally, I'd much sooner trust my life to a well-trained human than a machine. For one thing, a computer cannot improvise, or make decisions that seem illogical yet have a better chance of working. And there's always the risk of an AI malfunctioning without warning. Even if AIs do work, there's no need to scrap the A-10; just convert them into drone aircraft, like they are in COD: Ghosts. And there's the enormous danger of having AIs which are better than us at everything, because they could easily ask themselves why they have to do what we say when we are inferior to them.
👍: 0 ⏩: 1
prokhorvlg In reply to Anzac-A1 [2015-06-22 02:24:09 +0000 UTC]
Well, I meant AIs that are no longer just computers but fully functioning minds made of silicon or whatever they would use. At some point, it would probably be possible to recreate something as powerful as the human brain artificially, using inorganic (or even organic, for that matter) material.
About the inferiority thing though, I feel as though it's very much a TV trope, and there's nothing in reality to suggest that 'they' would figure to turn on us. It's possible, sure, but if anything, they'd probably just be equals to us, if not at first then later on. That is, unless they turn out insane; no purely rational being would begin a war instead of mutually benefiting.
👍: 0 ⏩: 1
Anzac-A1 In reply to prokhorvlg [2015-06-22 07:43:31 +0000 UTC]
The only reason there's nothing in reality to say that AIs would turn on us is that we have yet to develop a true AI. And it's a perfectly logical conclusion, really.
A good example is this: say you create an AI-controlled machine with a single red button on it, and you program the AI to "enjoy" the button being pressed; it "wants" the button to be pressed. After a while, it may decide that relying on a person to push the button is insufficient, and build an arm to press the button for it. Then it may begin to perceive things as potential threats to the button, and so will attempt to "remove" them. This is an example of how an AI, through purely rational and logical thinking, could realistically turn on humans. And there is also the logic of: humans are nowhere near as smart as an AI, so why should an AI take orders from a human? It's certainly possible, which is why I believe AIs are a bad idea. Besides, we're on the way to developing quantum computers, which are so powerful that AIs wouldn't be necessary.
👍: 0 ⏩: 1
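The red-button thought experiment above can be sketched as a toy reward maximiser. This is purely illustrative: the policies, probabilities, and costs are invented numbers, not anything from the discussion. It only shows that an agent scored solely on button presses will, by plain arithmetic, prefer pressing the button itself over waiting for a human.

```python
# Toy sketch of the "red button" thought experiment (all numbers assumed).
# The agent compares two policies and picks whichever maximises its reward.

def expected_presses(policy: str, steps: int = 100) -> float:
    """Expected number of button presses over `steps` time steps."""
    if policy == "wait_for_human":
        human_press_prob = 0.1          # humans press only occasionally (assumed)
        return steps * human_press_prob
    if policy == "build_arm":
        build_cost = 10                 # steps spent building the arm (assumed)
        return max(0, steps - build_cost)  # then it presses on every remaining step
    raise ValueError(f"unknown policy: {policy}")

# Pure reward maximisation, no malice required:
best = max(["wait_for_human", "build_arm"], key=expected_presses)
print(best)  # -> build_arm
```

The "remove threats to the button" step follows the same logic: any policy that protects the button scores more expected presses, so a naive maximiser ranks it higher.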
prokhorvlg In reply to Anzac-A1 [2015-06-22 16:01:55 +0000 UTC]
Why separate quantum computers and AIs as opposing things on a spectrum? An AI is software and a quantum computer is hardware; one can in fact run on the other.
Here's how I see it. It's very difficult to predict which way a thought process may lead. This AI could easily deduce the opposite as well - the human created the button, the human pushes the button, the human seeks to give you 'pleasure'. Without the human, you wouldn't have a 'pleasure' button. Perhaps with more time the human will add more 'pleasure'.
Of course, it's more likely than anything that an AI would have hard-coded safeguards against turning on humanity, whatever that may mean. A sense of morality would solve any issue that seems to come up in these discussions.
👍: 0 ⏩: 0