
arjay
Tufted Titmouse
Joined: 31 Dec 2013
Age: 35
Gender: Male
Posts: 36

23 Aug 2014, 7:01 am

Hi everyone,

Just wanted to ask everyone to share their opinions on the implications of current computer science research. I have a strong feeling that the next generation of information technology is coming. Demand for highly intelligent data analysis will keep growing until it is available to everyone, in all businesses and even in our cellphone apps. It looks like AI algorithms such as neural networks, GAs, and deep learning will become practical knowledge for every programmer. 5GLs (5th-generation languages) will eventually be as common as Java and C++ are now. Even computer hardware will begin to be designed for self-learning systems rather than rule-based instructions.
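To make the "practical knowledge for every programmer" idea concrete, here is a minimal sketch of the sort of thing meant: a single perceptron learning the logical AND function, in plain Python with no libraries. This is just a toy illustration of the idea, not any particular research system.

```python
# A single perceptron learning the logical AND function --
# a toy example of a self-learning system, as opposed to
# hand-written rule-based instructions.

def step(x):
    """Threshold activation: fire (1) if the input is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule over (inputs, target) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Nudge weights and bias towards the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
for (x1, x2), target in and_data:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + b))  # matches AND
```

The point is that nobody wrote an AND rule anywhere; the behaviour was learned from examples.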

What do you think? Could this be an exciting period for computers and programming?



TallyMan
Veteran
Joined: 30 Mar 2008
Gender: Male
Posts: 40,061

23 Aug 2014, 7:45 am

arjay wrote:
What do you think? Could this be an exciting period for computers and programming?


Yes, it could be. The thing that intrigues me the most is what will happen when we reach the technological singularity, which will likely be sometime within the next fifty years.

http://en.wikipedia.org/wiki/Technological_singularity

When artificial intelligence surpasses that of humans, the AIs will start to develop even more intelligent computers, robots, etc. It is theorised that there will be an exponential explosion in machine intelligence, with totally unknown consequences, benign or disastrous for humanity. I suspect that by that stage human programming languages will be obsolete, as will human programmers... and possibly the human race!


_________________
I've left WP indefinitely.


0_equals_true
Veteran
Joined: 5 Apr 2007
Age: 41
Gender: Male
Posts: 11,038
Location: London

23 Aug 2014, 8:26 am

To me AI is a tool. I'm getting a bit tired of these doctorates simultaneously warning about AI supremacy whilst willing it forward. To me this smacks of padding out and hyping their own research over others'. I remember one example where the guy believed that capacity alone (of a specific type) would somehow spontaneously evolve into AI.

Only a species like us would be simultaneously so smart and so stupid to allow this to happen.

AI needs to be confined to a context; it is not meant to replace natural intelligence but to supplement it. It needs to be done practically.

Although there are some contexts where human- or animal-like AI projects are viable, I think this is overdone; research should be focused on the abstract and practical aspects of AI rather than unabated biological replication.



TallyMan
Veteran
Joined: 30 Mar 2008
Gender: Male
Posts: 40,061

23 Aug 2014, 8:58 am

0_equals_true wrote:
Only a species like us would be simultaneously so smart and so stupid to allow this to happen.


I agree. Regarding AI, we are truly like a moth drawn towards a candle flame. Human nature will inevitably create the AI that triggers the technological singularity, and there will inevitably be huge unforeseen consequences. Part of the problem is that AI is such an attractive prospect, for so many positive reasons, provided it is developed to help us and is constrained to those purposes. However, AI is so open-ended that development won't stop there, until it is too late to turn back. Once we finally open Pandora's box, we won't be able to close it again.

Edit: Coincidentally, I just stumbled on this article in today's news:
http://www.independent.co.uk/life-style/gadgets-and-tech/robots-must-learn-to-value-humans-or-they-could-kill-us-out-of-kindness-9687378.html


_________________
I've left WP indefinitely.


arjay
Tufted Titmouse
Joined: 31 Dec 2013
Age: 35
Gender: Male
Posts: 36

23 Aug 2014, 11:51 am

I think the motivating factor for introducing AI into machine systems is that anything complex but predictable is a candidate for automation. The standard technique for this is contemporary, rule-based programming, where the Turing machine is realised. However, as humans become 'smarter' (hopefully wiser), we want to pass the burden of complex and lengthy programming to the machines themselves, hence the motivation for AI, or hypercomputing.
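The rule-based vs. learned distinction above can be shown with a hypothetical toy task (all names and numbers here are made up for illustration): deciding whether a machine is overheating, once with a hand-written rule and once with a threshold inferred from labelled examples.

```python
# Hypothetical illustration: the same decision encoded as a
# hand-written rule vs. learned from labelled examples.

# Rule-based: the programmer encodes the decision directly.
def overheating_rule(temp_c):
    return temp_c > 80  # threshold chosen by a human

# Learned: a trivial "training" step infers the threshold as the
# midpoint between the hottest normal and coolest overheating reading.
def learn_threshold(samples):
    hot = [t for t, label in samples if label]
    cool = [t for t, label in samples if not label]
    return (min(hot) + max(cool)) / 2

readings = [(60, False), (70, False), (75, False), (85, True), (95, True)]
threshold = learn_threshold(readings)  # (85 + 75) / 2 = 80.0

def overheating_learned(temp_c):
    return temp_c > threshold

print(overheating_rule(90), overheating_learned(90))  # both flag 90 C
```

In the rule-based version a human did the thinking once, up front; in the learned version the burden of finding the rule is passed to the machine, which is the shift the paragraph above describes.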

The challenge is finding a means of ensuring that constraining the AI programs won't have unpredictable results; a rudimentary example is what happened in the movie I, Robot, but of course, in reality it's more complicated than that. Maybe part of the BRAIN Initiative in the US is to understand what consciousness is and, at the same time, turn the unknown or unpredictable parts of our thinking into predictable concepts, which would make artificial neural networks properly constrained. I am not against the development of such advanced systems, provided it is carefully tested that they remain under our control.

As for hardware, in my opinion, today's systems progress from processor-based, to GPU-based, to emulations of neural network hardware modelled on the neurological structure of the brain; see http://researchweb.watson.ibm.com/cogni ... QPvHxNFq-d. Small-scale AIs are achievable, with limited synaptic capabilities. With our current electronics capability, I think it is still unfeasible to develop hardware that would match human intelligence. Much research needs to be done on neuromorphic hardware using the right materials (like memristors) to fully match, and possibly surpass, the human brain.



RetroGamer87
Veteran
Joined: 30 Jul 2013
Age: 36
Gender: Male
Posts: 10,932
Location: Adelaide, Australia

23 Aug 2014, 12:08 pm

Surely a sentient machine would be useless. Isn't the point of making machines do tedious work that they won't get bored? Why create a machine that's capable of becoming bored? That would put us back to square one. What happens when the robots become unionized?


_________________
The days are long, but the years are short


Tiranasta
Toucan
Joined: 30 Jun 2008
Age: 32
Gender: Male
Posts: 278

24 Aug 2014, 4:33 am

RetroGamer87 wrote:
Surely a sentient machine would be useless. Isn't the point of making machines do tedious work that they won't get bored? Why create a machine that's capable of becoming bored? That would put us back to square one. What happens when the robots become unionized?

Sentience doesn't necessarily imply 'capable of boredom', but even so there are countless potential problems with giving emotions to such machines. That's why the golden ideal should be 'sapient and self-aware but non-sentient'.



0_equals_true
Veteran
Joined: 5 Apr 2007
Age: 41
Gender: Male
Posts: 11,038
Location: London

24 Aug 2014, 5:47 am


Although some thinkers make some good points, I tend to loathe the whole notion of professional "futurists".

A lot of what they talk about is speculative piffle.



RetroGamer87
Veteran
Joined: 30 Jul 2013
Age: 36
Gender: Male
Posts: 10,932
Location: Adelaide, Australia

24 Aug 2014, 6:03 am

I should get a job like that. I can predict what I want and no one will fault me if I'm wrong because it's impossible to predict the future anyway.

Sometimes predicting the future makes for self-fulfilling prophecies.


_________________
The days are long, but the years are short


Tiranasta
Toucan
Joined: 30 Jun 2008
Age: 32
Gender: Male
Posts: 278

24 Aug 2014, 7:25 am

RetroGamer87 wrote:
Sometimes predicting the future makes for self-fulfilling prophecies.

This is a feature, not a bug. This is the single most useful function of futurism.



TallyMan
Veteran
Joined: 30 Mar 2008
Gender: Male
Posts: 40,061

24 Aug 2014, 7:52 am

Tiranasta wrote:
RetroGamer87 wrote:
Sometimes predicting the future makes for self-fulfilling prophecies.

This is a feature, not a bug. This is the single most useful function of futurism.


I agree; it is where dreams and ideas are formed, and then technology races to make them a reality, especially if there is money to be made from it.


_________________
I've left WP indefinitely.


Coolguy
Blue Jay
Joined: 28 Jun 2014
Age: 37
Gender: Male
Posts: 95

26 Aug 2014, 12:17 pm

For good information on this topic I recommend the following blogs:

http://rebelscience.blogspot.com/

http://entersingularity.wordpress.com/

I can't speculate on the future of AI any better than these guys can.



slave
Veteran
Joined: 28 Feb 2012
Age: 111
Gender: Male
Posts: 4,420
Location: Dystopia Planetia

26 Aug 2014, 3:38 pm

:D

The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines by Hugo de Garis

http://www.amazon.com/Hugo-De-Garis/e/B001KJ0ISY

been reading this kind of stuff for years...lol

look him up on YT if you want a good scare.



Here
Deinonychus
Joined: 17 Jun 2012
Age: 61
Gender: Male
Posts: 379
Location: California

07 Sep 2014, 5:36 pm

TallyMan wrote:
arjay wrote:
What do you think? Could this be an exciting period for computers and programming?


Yes, it could be. The thing that intrigues me the most is what will happen when we reach the technological singularity, which will likely be sometime within the next fifty years.

http://en.wikipedia.org/wiki/Technological_singularity

When artificial intelligence surpasses that of humans, the AIs will start to develop even more intelligent computers, robots, etc. It is theorised that there will be an exponential explosion in machine intelligence, with totally unknown consequences, benign or disastrous for humanity. I suspect that by that stage human programming languages will be obsolete, as will human programmers... and possibly the human race!


The interest in 'quantum computing' has been growing lately.



arjay
Tufted Titmouse
Joined: 31 Dec 2013
Age: 35
Gender: Male
Posts: 36

24 Sep 2014, 6:07 pm

One way to solve the issue of the singularity (and our doom) is to let humans catch up first. We should upgrade our own intelligence before building superintelligent machines. One way is a direct connection of artificial neural network hardware to a human brain, enhancing a person's overall thinking capacity through brain-computer interfaces, which are under active research nowadays.

The bottom line is that we must first focus our efforts on 'directly' upgrading our own intelligence by augmenting our brains with computers; only after that is it safe to produce more intelligent, independent AI systems.

We should think about and improve ourselves first before passing down our most distinguished feature (intelligence) to machines.



slave
Veteran
Joined: 28 Feb 2012
Age: 111
Gender: Male
Posts: 4,420
Location: Dystopia Planetia

24 Sep 2014, 8:40 pm

arjay wrote:
One way to solve the issue of the singularity (and our doom) is to let humans catch up first. We should upgrade our own intelligence before building superintelligent machines. One way is a direct connection of artificial neural network hardware to a human brain, enhancing a person's overall thinking capacity through brain-computer interfaces, which are under active research nowadays.

The bottom line is that we must first focus our efforts on 'directly' upgrading our own intelligence by augmenting our brains with computers; only after that is it safe to produce more intelligent, independent AI systems.

We should think about and improve ourselves first before passing down our most distinguished feature (intelligence) to machines.


I concur.

However, this prudent measure will not be taken.

The funding flows to that which is most desired by the large corporations and gov'ts, and they want AI.