Professor Kilnam Chon: Brilliant man, terrible speaker.
Unfortunately the first speaker to open this conference, a pioneer in the early days of the Korean internet, did more to annoy me with his butchery of the English language than inspire me with his insight into the future of internet connectivity. I'm not entirely sure why he chose to give his presentation in English when translation services were available. Perhaps he felt more comfortable in English and did not have the option of speaking in his native Japanese (the only translation services available were English and Korean). Anyway, I found his word choice, comparisons, and comments to have an obnoxious sort of inaccuracy that came across as Korean nationalistic zeal rather than insightful analysis or forecasting.
Isn't that odd? Since I was operating on little more than my morning coffee and a bit of adrenaline from a harrowing commute, I'm working from mental notes here. But one example was his assertion that the internet was "invented again" in Korea in 1982 after ARPANET in 1969, as if we're to believe there were no developments by any other country in the 13 years in between, or that the "invention" of the internet in Korea wasn't essentially a similar design to what already existed at UCLA. Not that it wasn't a great achievement, but Kilnam himself was at UCLA at that time, and he was later instrumental in implementing the Korean adaptation. Funny how he didn't mention his own role in it, but either way that certainly doesn't sound like an invention to me.

Semantics aside, he also had the seemingly insecure habit of constantly comparing the internet situation in Korea with that of other countries. It was shortsighted too, in that it was impossible to ignore that the only countries apparently worth mentioning were the US, Korea, and the generic "developing countries" (or perhaps the focus was solely to illustrate the benefit Korean tech could bring to other countries). Onto what could have been a decent presentation he piled half a hundred jargon-y buzzwords like "pioneering", "innovative wisdom", "visionary", and "dynamic", until it had the effect of someone talking for a long time and yet actually saying very little.
My last and biggest complaint was the actual structure of his presentation. It began as a history lesson, moved into talking about content, briefly paused to actually make some decent points for further development, then machine-gunned it all down with the aforementioned meaningless fluff about technological creative inspirations and innovations that... well... generally... don't happen in Korea. Apologies for the long-winded rant, but it was the least I could do to lambast this guy for wasting my first morning with his ill-prepared excuse for a presentation.
Steven Moffat: Interview with the writer of Doctor Who and creator of Sherlock.
My girlfriend and I gave each other looks of knowing excitement as the creative genius behind two of the UK's most popular TV shows at home and abroad slid on stage. I got into both Doctor Who and Sherlock relatively recently, but found both captivating enough to watch all there was of each. This interview was fun. Nothing fantastic really, a little humor, but I think seeing Steven Moffat in person was the main draw here. Interestingly, the questions were asked in Korean and translated via radio to Moffat right there on stage, which made for a pretty cool dialogue if you ask me. No real spoilers to speak of, with the deducible exception that Benedict Cumberbatch will indeed return for the next season despite Sherlock's incarceration.
Henry Markram: Brain Reading, and the quest to computerize the human brain.
I found this to be probably the most interesting of the talks I witnessed at this event. Traditional AI implementations weren't quite what we would like to think of as "Artificial Intelligence", in that when given input outside of expected parameters they could neither provide acceptable output nor learn in the manner you would expect of a human. Into this comes Markram and his implementation of computerized brain simulation. Rather than an AI that attempts to imitate human responses, Markram and his colleagues are attempting to imitate the actual architecture of the human brain. This, he asserts, is a much harder task, requiring massive supercomputer arrays armed with smartphone-style microprocessors that have yet to be built in this configuration. Despite needing advances in both hardware AND software to simulate a human brain, he showed he has already been successful in modeling the brain of a mouse. A mouse that apparently sort of "dances" of its own volition.
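To make the contrast with conventional, response-imitating AI a bit more concrete, here's a minimal toy sketch of the bottom-up approach as I understood it (my own illustration, nothing to do with Markram's actual simulation code): a handful of simplified "leaky integrate-and-fire" neurons wired together at random and stepped through time, where whatever behavior shows up emerges from the wiring rather than from any pre-programmed responses.

```python
# Toy illustration only: a few leaky integrate-and-fire neurons wired at random.
# Nothing like the scale or biological detail of Markram's simulations; it just
# shows the "model the architecture, not the responses" idea.
import numpy as np

rng = np.random.default_rng(0)

N = 5                      # number of neurons
dt = 1.0                   # time step (ms)
tau = 20.0                 # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0

# Random synaptic weights between neurons (no self-connections).
weights = rng.normal(0.0, 2.0, size=(N, N))
np.fill_diagonal(weights, 0.0)

v = np.full(N, v_rest)     # membrane potentials
for t in range(200):
    external = rng.uniform(0.0, 1.5, size=N)       # noisy external input current
    spiked = v >= v_thresh                          # which neurons fired this step
    synaptic = weights @ spiked.astype(float)       # input from spiking neighbours
    v[spiked] = v_reset                             # reset the neurons that fired
    # Leaky integration toward the resting potential, driven by the inputs.
    v += dt / tau * (v_rest - v) + external + synaptic
    if spiked.any():
        print(f"t={t} ms: neurons {np.flatnonzero(spiked).tolist()} spiked")
```

Scale that idea up by many orders of magnitude and pile on the biological detail, and you start to see why he needs purpose-built supercomputer arrays for it.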
The truly amazing part of this is that it would be an imitation of a fully human intelligence. Normal humans, however, are subject to brain cell death, nonperformance, and miscommunication due to the myriad and ubiquitous environmental effects of living in the natural world. This creation would have none of that; it wouldn't lose capacity from binge drinking with its college buddies, nor would it suffer from genetic or environmentally influenced chemical imbalances, so it wouldn't have schizophrenic tendencies or any similar disorders. Effectively, this AI would be essentially human, with many of the flaws and limitations thereof removed or immunized against. A potential superhuman (even though Markram didn't get this far into the probable results of the project, this is what I think it implied).
Thankfully, Markram didn't reveal that his company is named Cyberdyne...
Rob High: IBM's Watson and accessibility of the information marketplace.
This presentation pointed to what could be a similar technological achievement to what Markram spoke about, but its implementation had a more practical end-goal. While Markram intends to essentially make a digital human, Rob High told us how IBM's Watson was supposed to be a massive information aggregate, the likes of which Google could only dream of. The idea here is that it allows for an interface that responds more appropriately to human language. Apparently, it was first used three years ago to play Jeopardy!, but its development was to far exceed this application; since then it has been slated for applications in medical diagnosis, where, due to its aggregate nature, it could be more effective than any single doctor at prescribing treatments for various illnesses.
There were many proposed applications for this technology. To me it kinda sounded like Siri on steroids. High described some of these applications, including simply asking Watson for a recipe for dinner. By telling it the kind of food you were in the mood for, Watson would be able to come up with a variety of delicious but unexpected dishes, presumably because habitual routine and flawed heuristics are things Watson would be far less susceptible to.
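Just to illustrate the flavor of that recipe example, here's a toy sketch of my own (purely hypothetical, and nothing like IBM's actual Watson API): match a free-text craving against a small tagged recipe list and nudge the less obvious dishes up the ranking.

```python
# Toy sketch, not IBM's Watson: a crude stand-in for the "tell it what you
# feel like eating" idea, matching a free-text craving against a tiny tagged
# recipe list and preferring the less obvious hits.
RECIPES = {
    "bibimbap":           {"korean", "rice", "spicy", "vegetables"},
    "kimchi carbonara":   {"korean", "italian", "pasta", "spicy", "unusual"},
    "shepherd's pie":     {"british", "comfort", "potato", "beef"},
    "gochujang brownies": {"korean", "dessert", "spicy", "unusual"},
}

def suggest(craving: str, max_results: int = 2):
    words = set(craving.lower().split())
    scored = []
    for name, tags in RECIPES.items():
        overlap = len(words & tags)
        if overlap:
            # Nudge "unusual" dishes up the list to mimic the knack for
            # unexpected-but-fitting suggestions.
            scored.append((overlap + (0.5 if "unusual" in tags else 0), name))
    return [name for _, name in sorted(scored, reverse=True)[:max_results]]

print(suggest("something korean and spicy"))  # ['kimchi carbonara', 'gochujang brownies']
```

Obviously the real system would work over vastly more data with actual natural-language understanding rather than keyword overlap, which is exactly what makes the "Siri on steroids" comparison tempting.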
Honestly, I found this talk to drone on for a bit. Not that I blame him; I certainly have a habit of nerding out myself on occasion. His material was pretty interesting, but I felt the presentation itself could have been polished a bit more.
Despite some of the hiccups, this event was a pretty good use of time for anyone in, or interested in, the industry. I would've liked to see more than just the morning sessions, but work calls. Until next time...