2017 Investor Roundtable: General Discussion

We are at a critical time in history. If left unchecked, AI will quickly surpass humans in brain power, and soon after that point it will become millions of times more powerful than the human brain; this vertical phase may take only weeks or months. At that point AI will work on AI, and we become ants.
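Back-of-the-envelope, and the weekly doubling rate below is purely an assumption, not a known fact about any AI system:

```python
# Illustrative arithmetic only for the "vertical phase" claim above.
# ASSUMPTION: capability doubles once per week (invented for illustration).
import math

target_multiple = 1_000_000          # "millions of times more powerful"
weeks = math.log2(target_multiple)   # doublings needed, at one per week
print(f"~{weeks:.0f} weeks to a {target_multiple:,}x gain")  # ~20 weeks
```

Under that assumption the jump from human parity to a million-fold gap really would fit inside a few months; a slower doubling rate stretches it proportionally.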

SoftBank CEO Son thinks robots will never surpass humans in imagination. That is totally wrong. There is nothing so special about imagination that AI cannot do it. Wait until their IQ reaches 800.

Elon fully understands the risk of AI.

I work in a highly related field (though far from claiming expertise - only greater than complete ignorance), and I haven't yet seen evidence of a computer program / AI that can solve the problem of "what is the problem that needs to be solved?" Or the related one: "what is the opportunity that should be taken advantage of?"

When there is a defined objective or winning condition (chess, Go, sabermetrics), we've seen computer programs of various kinds that can do a better job of solving that problem than humans can. But I still haven't seen even signs of life of a program that could pick chess as a problem to solve in the first place, much less define what "winning" or "success" in chess would look like, in order to then go off and solve chess better than humans.
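A minimal sketch of that distinction, with hypothetical names: in every game-playing program I'm aware of, the objective is a function humans hand to the search, never something the program chooses for itself.

```python
# Hypothetical sketch: the search can be arbitrarily strong, but the
# objective - what counts as "winning" - is always an input supplied by
# humans, never something the program picks on its own.

def best_move(state, legal_moves, play, evaluate):
    """Greedy one-ply search over a human-supplied evaluation function.

    legal_moves(state) -> iterable of moves   (humans define the game)
    play(state, move)  -> next state          (humans define the rules)
    evaluate(state)    -> score               (humans define "winning")
    """
    return max(legal_moves(state), key=lambda m: evaluate(play(state, m)))
```

Nothing in that loop ever asks "should I be playing chess at all?" - `evaluate` is given to it from outside.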

In the case of autonomous driving, we're seeing evidence of computer programs that can drive cars as well as or better than humans. However, I still haven't seen evidence of a computer program that can pick "autonomous driving" as a problem to be solved, much less define the objective or winning condition to solve for, so that the program can write the program that drives the car as well as or better than a human.


I'm not saying that it's not possible for a computer program / AI to reach the point where it's the program that identifies the problem / opportunity to be solved and defines success in solving it, so that the AI can then go solve the problem. Only that I have never seen or heard signs of life anywhere of an AI / computer program being able to do so.

I also have no personal evidence, signs of life, or even a functional mental model of how that would work or what it would look like.


I'm also not holding my breath waiting for the day when a computer / program or AI is able to identify the problem to be solved. I make use on a daily basis of AI and related techniques to help me sift through big piles of data in order to inform and improve the decision making I'm involved in, and I expect AI and related techniques to continue helping humans make a wider and wider variety of such decisions.


Interesting to me is that in the chess and sabermetrics examples (Nate Silver talks about these, among others, in his book The Signal and the Noise), on the other side of the AI / computer program getting good enough to beat humans, the best solution to the problem evolves to be neither human nor AI / computer program on their own. Instead it's some sort of combination / hybrid of the two forms of input.

In the case of Go, my guess based on the prior art is that over the next few years we'll see an evolution within Go to where the very best Go "player" will be some sort of team made up of a mix of human and AI / computer program. I could, of course, be wrong about that.

Either way, as complex as Go is, it's trivial next to autonomous driving, which is itself trivial next to the sorts of problems AIs will need to start solving in order for AI to have "imagination" or anything like brain power. The central problem itself is "what is the problem / challenge / opportunity that needs to be solved?", followed by "what constitutes success / what do we solve for?"

I'm not worried about AI solving that problem, based on any work I'm aware of going on.
 
The M3 is not ramping AT ALL and the X still has some SERIOUS quality issues. The S is the saving grace for Tesla and the one (and only) thing they've gotten right so far ("gotten right" severely understates how great an achievement the S was/is).

Words like "will", "coming", and "about to" are about the future. And for Tesla, it's been about the future for a long time. I'm afraid it's starting to be about the "now". Investors simply aren't going to keep investing on potential and promises. We have 2 months left in 2017 and by all accounts, Tesla has delivered less than 500 M3s.

That's what's "not to like".

^ I agree. People can criticize this post all they want if it makes them feel better, but as someone who believes in Tesla's future and had been heavily invested in them for 3+ years, the reasons above are why I finally pulled the plug... for now. There is just an overwhelming amount of negativity directed towards Tesla at the moment and much of it is justifiable. They truly are missing their "iPhone moment" with the M3 delays. From early reports the M3 looks like an absolute winner, but at the moment it is still essentially vaporware. In my opinion it doesn't matter how many cool announcements Elon makes right now; I don't see the stock price going up until they start delivering the M3 to customers at a decent rate.

This brings me back to something I've warned about time and time again.

I strongly, STRONGLY believe that most retail investors should not hold any shares in TSLA. The vast majority of people have neither the patience nor the psychological will to hold the stock for the length of time necessary for a potentially big payoff. You have to be some kind of crazy or have some psychopathic tendencies to withstand the instability, FUD, and other drama (including this forum).

We should be honest about TSLA: it is a bet on a futuristic vision. It is a volatile stock that often moves up or down significantly in the short term for no logical reason. Most people, both Bull and Bear, trying to time this stock in the short term are going to get run over by the Big Banks and will suffer pain and humiliation.

People who are clearly uncomfortable with this level of risk should sell their shares and buy something more stable. Index funds, or even an industry ETF like BOTZ (an AI- and robotics-focused fund), are a more appropriate investment for most people.

Bottom line: Limit your investment to an amount such that you can sleep at night. If bumps in Tesla's road keep you awake at 2 AM or otherwise concern you greatly, this is not an investment appropriate for your risk tolerance. People may criticize Sammyzuko for selling, but that may be a wise decision given their specific situation and personal risk tolerance.
 
I work in a highly related field (though far from claiming expertise - only greater than complete ignorance), and I haven't yet seen evidence of a computer program / AI that can solve the problem of "what is the problem that needs to be solved?"

...

I'm not worried about AI solving that problem, based on any work I'm aware of going on.

For many years I've thought myself incapable of an original thought. Though I have a pretty good memory, I'm incapable of saying anything new; all I can contribute is merely derivative. Thus I have the limitation you say infects machines. You've articulated here something which really is new, at least to me.

But how many times do we ever set ourselves the task of answering "what is the problem / challenge / opportunity that needs to be solved?" and the follow-up, "what constitutes success / what do we solve for?" We think like this only when a problem presents itself, which almost by definition confines our thinking to a narrow, practical search for solutions to "that" problem. The ecological concern Musk has chosen for Tesla is pretty clear, based on the evidence of our poor solutions to the energy problem and increasing concern for the environment and the preservation of the human race.

Of course Musk is a great example of a creative person doing good. But I'm not at all certain humans do what you suggest unless the problem is half solved by becoming so obviously a problem to begin with.

What if we sat down without a care in the world and thought about a problem that had to be solved? Let me pick an example: "how can we ensure that good will always prevail over evil without doing evil in the process?" All I can offer is, again, derivative. It is said that during his presidency Abraham Lincoln read only Shakespeare and the Christian Bible.
 
It is human nature to focus on the negative, but I am still surprised at how little attention the automotive and general press have given to Tesla's uncorking of most 75Ds, which takes about 1 second off 0-60 times (5.2 --> 4.2 sec for the S and 6.0 --> 4.9 sec for the X). This is a tremendous performance boost provided at no charge to customers who had no reason to expect it.

I took a peek at the BMW and Mercedes websites and while there are no exact apples-to-apples comparisons it looks like if you bought the additional power in a new car it would cost somewhere in the range of $5000-$10000 for the BMW 5 and 7 series and S class. Tesla has provided the upgrade to cars that were reportedly built as early as April 2016 -- so it's like Christmas in October for a large number of owners, who seem really thrilled with the noticeable boost in performance if the posts on TMC are any indication.

The 75D appears to be the most popular model of S and X (for example, the majority of Model S's in the recent quarter's spreadsheet are 75Ds) so this seems like a great way to build goodwill with a large number of Tesla customers. The cost should be low because it appears there was capacity at the service centers (possibly due to the Model 3 rollout being delayed a bit) and presumably Tesla had plenty of time to validate that the uncorking would not cause too many additional warranty claims.

Awesome move IMO -- very impressive.
 
No one with a brain on this forum takes the first post from a new account seriously.

Go back and look at Q4 of 2015. The entire quarter. For stockholders, it was a mind-numbingly stressful quarter of waiting for Model Xs to get delivered, for quality problems to get ironed out, and so on. It has all been taken care of. Right around two years later, we are seeing the same thing with the Model 3 - except in this case, the ramp-up is definitely occurring more quickly.

Whatever an "iPhone moment" is - who cares if Tesla misses that. They produce and deliver more and more cars every quarter. The company is growing. No-one has anything that touches the Model 3 - or their entire range of cars, for that matter.
fwiw... the Model X was a side show and the Model 3 is the main event... they cannot eff this up the way they did the Model X... they will blow billions... on top of what they already blow.

"the ramp-up is definitely occurring more quickly" -- it is?... they haven't even delivered a vehicle to a customer yet. the employees are not customers... they've had massive component replacements on them... you don't do that to customers... but you can do that to employees.

they are still beta testing this thing.
 
A reminder to everyone who is disappointed by the Model 3 ramp so far: a little more than a year ago, they decided to accelerate the ramp plan by two years. Measured against the original plan, they're not late at all. Sure, they look like they're missing the target now, but making this change boosted the stock over 300 and got them an unbelievable bond issue just a few months ago. This "over-promising" strategy worked out perfectly from their point of view. It was not the first time they used this kind of strategy, and I doubt it will be the last.

The alternative (500k in 2020) would not likely get TSLA over 300 at this time. Think about that for a moment.

For anyone who's been closely following the company and stock for more than a year, this is like TSLA 101.
that's called fraud... you are celebrating fraud.
 
True also for ‘Real’ Intelligence

A tongue-in-cheek response that I also happen to agree with :)

In a serious vein, I consider at least 50% of the work involved in "doing data science" to be this problem. Namely, what is the problem / opportunity / challenge that we want/need to work on, what does success look like, and what data do we have available that addresses that problem.

I shorten all of that down to "frame business problem as analytics problem". I've worked on projects for weeks or months before we knew what the problem was that we were trying to solve. And invariably, once this becomes crisp, it also seems like it becomes vanishingly small and specific.

As difficult as model building and validation are, I consider "frame business problem as analytics problem" to be far and away the hardest problem of all to solve consistently. Because it's not strictly math, and it's also not strictly whatever you think/decide it might be - it's a mix of imagination that gets connected back to specific data and analysis techniques through which you can imagine a path to success (even if the eventual path to success looks completely different from what was originally imagined).
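To make that concrete, here's a tiny hypothetical sketch of what one such framing can look like in code - every column name and the 90-day window below are invented for illustration, not taken from a real project:

```python
# Hypothetical "framing" sketch: turning "we're losing customers" (vague)
# into a supervised-learning problem (specific and narrow).
import pandas as pd

def build_training_frame(customers: pd.DataFrame, snapshot: pd.Timestamp) -> pd.DataFrame:
    # Unit of analysis: one row per customer still active at the snapshot date.
    active = customers[
        (customers["signup_date"] <= snapshot)
        & (customers["cancel_date"].isna() | (customers["cancel_date"] > snapshot))
    ].copy()
    # Label: did the customer cancel within 90 days AFTER the snapshot?
    # (Features must use only information known at the snapshot - no leakage.)
    active["label_churn_90d"] = (
        active["cancel_date"].notna()
        & (active["cancel_date"] <= snapshot + pd.Timedelta(days=90))
    ).astype(int)
    return active
```

Notice how much got decided before any model exists: the unit of analysis, the label window, and what data is even admissible. That's the "vanishingly small and specific" part.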


Here's an example from somebody who uses some of the techniques but doesn't actually work in autonomous driving. Given "autonomous driving", what is the specific problem that we need our program / AI to solve for?

For me at least, the first articulation of that problem is something like "keep it in your lane, don't hit the person in front of you". That actually yields two problems to solve: sense the vehicle / thing in front of your vehicle and don't hit it, while simultaneously steering right and left so that you don't leave your lane.

Of course, this simple first pass won't get you to autonomous driving, but it DOES get us to something that is immediately useful today. Many of us use it on a regular basis in our Teslas.
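For illustration only, here's a toy control loop for those two subproblems - the gains and constants are made up, and nothing here is Tesla's actual Autopilot logic:

```python
# Toy sketch of the two subproblems above (all constants invented):
#   1) don't hit the vehicle ahead -> regulate speed toward a time gap
#   2) stay in your lane           -> steer back toward the lane center

def control_step(gap_m, ego_speed_mps, lane_offset_m):
    desired_gap_m = max(2.0 * ego_speed_mps, 5.0)  # ~2-second following gap
    accel_cmd = 0.5 * (gap_m - desired_gap_m)      # P-control on the gap
    steer_cmd = -0.2 * lane_offset_m               # P-control on lane offset
    return accel_cmd, steer_cmd

# Too close (15 m gap at 25 m/s) and drifting 0.3 m right of center:
print(control_step(15.0, 25.0, 0.3))  # -> brake, plus a gentle left correction
```

Even this toy version quietly assumes the hard part is already solved: reliable sensing of the gap and the lane offset.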


So what else do we need our autonomous driving program to do?

Well, it'd be nice if it knew where we were going (navigation destination has been chosen), and if it were able to signal and change lanes on the freeway into exit lanes, exit one freeway, and then merge onto the next. That introduces a whole host of additional problems, while still being a pretty well-constrained problem and still far short of "autonomous driving". Encompassed in this will be logic / AI for changing into an adjoining lane without hitting somebody and without cutting somebody off, and then changing lanes again to merge onto the next highway.

There's also another, more strategic bit of logic that monitors your vehicle's progress along the route specified in nav and makes decisions about the need to change lanes, exit one highway, and merge onto a new one. And at some point it must signal to the driver that the portion of the route the car is ready/willing to navigate on its own is ending and the driver needs to be ready to take over.

This functionality is something Tesla has talked about releasing, and we're STILL nowhere close to "autonomous driving".
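Sketching just that strategic layer as a toy state machine - the states, distances, and names are all hypothetical, not anything Tesla has described:

```python
# Toy "strategic layer" for the route-monitoring logic described above.
# All states and thresholds are invented for illustration.

def route_monitor(dist_to_exit_m, current_lane, exit_lane, segment_supported):
    if not segment_supported:
        return "HANDOVER_TO_DRIVER"   # supported portion of route is ending
    if current_lane != exit_lane and dist_to_exit_m < 1500:
        return "REQUEST_LANE_CHANGE"  # tactical layer handles gaps/signaling
    if current_lane == exit_lane and dist_to_exit_m < 300:
        return "TAKE_EXIT"
    return "KEEP_LANE"

print(route_monitor(1200, current_lane=2, exit_lane=0, segment_supported=True))
# -> "REQUEST_LANE_CHANGE"
```

Each returned state then fans out into its own pile of tactical problems, which is exactly the point of the next paragraph.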


Upshot, at least for me: "autonomous driving" isn't a single problem. It's dozens (and maybe more like hundreds) of intertwined problems that all need to be solved. And remember, as difficult as all of these problems are individually and collectively to solve, they are still trivial next to "what is the problem / challenge / opportunity for us to address?"

We data scientists may find ourselves automated out of a model-building job in the future (plenty of technology is showing up to automate / simplify the model-building process). I STILL haven't seen something that will automate, or even make a guess for us at, what problems are worth solving, need solving, and can be solved given the data available or acquirable.
 
...

Of course Musk is a great example of a creative person doing good. But I'm not at all certain humans do what you suggest unless the problem is half solved by becoming so obviously a problem to begin with.
....

This is part of what makes "framing business problems as analysis problems" so difficult. It really happens at many different levels of detail, with some of them being more obvious and straightforward, and some of them so obscure and difficult that one of the problems is even getting other people to agree that the problem you see is a problem that needs to be solved.

Think tactical problems vs. strategic problems (simplified of course).


I don't know what it would mean to start with a completely blank slate and then pick one or more problems to solve. To some extent that's what Elon's done, but I would say the slate wasn't completely blank. Rather, given a big universe of big problems, he's been able to proactively identify a few that manifest in a large or small number of different ways, and then start coming up with solutions to them.


Another way of thinking about this might be our human ability to be proactive. To choose something that we think needs to be worked on, and then work on it. The first part of "then work on it" is to figure out HOW to work on it - do we build a company, run a car wash, make a donation, play a game, ...

I'm at the edge of what I can contribute - this helps me better articulate why I'm not particularly worried about AI taking over.
 
This brings me back to something I've warned about time and time again.
The vast majority of people have neither the patience nor the psychological will to hold the stock for the length of time necessary for a potentially big payoff. You have to be some kind of crazy or have some psychopathic tendencies to withstand the instability, FUD, and other drama (including this forum).

o_O

At least you didn't call us fat.
 
A reminder to everyone who is disappointed by the Model 3 ramp so far: a little more than a year ago, they decided to accelerate the ramp plan by two years. ... This "over-promising" strategy worked out perfectly from their point of view.

...

For anyone who's been closely following the company and stock for more than a year, this is like TSLA 101.

I'm not sure what kind of evidence you have of intent, but I don't think Tesla intended to commit securities fraud.
 
In a serious vein, I consider at least 50% of the work involved in "doing data science" to be this problem. Namely, what is the problem / opportunity / challenge that we want/need to work on, what does success look like, and what data do we have available that addresses that problem.

Upshot, at least for me: "autonomous driving" isn't a single problem. It's dozens (and maybe more like hundreds) of intertwined problems that all need to be solved. And remember, as difficult as all of these problems are individually and collectively to solve, they are still trivial next to "what is the problem / challenge / opportunity for us to address?"

Another way of thinking about this might be our human ability to be proactive. To choose something that we think needs to be worked on, and then work on it. The first part of "then work on it" is to figure out HOW to work on it - do we build a company, run a car wash, make a donation, play a game, ...

Yes, I agree with your general assertions here. I was fortunate to have worked on a number of these concepts and AI issues in their early stages and would love to relate more of that to a discussion at some point, but it's way too OT - even for Prof Mod.
For now, though: the differentiator you're describing lies in the human analogue vs. AI digital domain. You're not going to like this, but the human advantage is actually NOT in identifying (and solving) a problem.
Instead, it's the counter-intuitive human ability to enjoy 'not giving a f*ck'.

great discussion, thanks adiggs -
I'll make a note to post some thoughts in the Long Term thread sometime when the Mods are stoned.
Come to think of it, that might be the best time for me too!:p
Thanks again
 
The Whitefish no-bid Puerto Rico contract stinks
"<
Meet Whitefish Energy, which has just been awarded a $300 million project to rebuild storm-smacked Puerto Rico's electrical grid. Whitefish is based in the hometown of Secretary of the Interior Ryan Zinke, who knows the firm's chief executive and whose son once worked for Whitefish in a modest capacity. Whitefish Energy has two full-time employees, and its largest government contract prior to this was a $1.3 million job fixing 4.8 miles of power line. Its biggest government job other than that was replacing a pole. Whitefish Energy is a two-year-old firm, and it reported $1 million in revenue on its procurement documents.
>"
Ian Masters from Pacifica Network discusses this on tonight’s show. Great show, you can hear it online.
 
One bad consequence of generating hype and sharing unrealistic promises is the loss to the common guy who bought into the Elon brand and has no idea what's happening with the company. Not every retail buyer follows the company as closely as most here do. These people see an interesting Elon tweet or a mention of BFR and think he can do everything. Sure he can. But they may not know Elon's timelines or how volatile Tesla is. These are the people who lose during times like this. I feel sorry for them.
 
The Whitefish no-bid Puerto Rico contract stinks

"Meet Whitefish Energy, which has just been awarded a $300 million project to rebuild storm-smacked Puerto Rico's electrical grid. ..."

This, the ugly face of blatant corruption in PR, is what scares me. Elon should tread carefully on any projects in this dangerous place.

The current power brokers in PR do not want statehood, as the status quo shields them from scrutiny by people/media in the mainland USA. For all practical purposes, PR is a third-world colony of the USA.
 