Microsoft refuses to endorse WebGL, labels it ‘harmful’ (winrumors.com)
109 points by ssclafani on June 16, 2011 | 106 comments


Wow, this thread is turning into another security-clueless developer freakout. Microsoft very clearly laid out reasons why they believe that WebGL presents possible security vulnerabilities. While GPU security isn't my area of expertise, the driver issues seem very plausible. If someone wants to actually address Microsoft's points in this thread then that would be great.


Shaders are the big vulnerability I know of.

Any game developer will tell you: It's pretty easy to accidentally craft a shader that will totally stall particular GPUs, taking down the entire windowing environment, if not the entire system. With malicious intent, shaders are a giant gaping DoS attack waiting to happen.
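As a concrete illustration of the DoS concern, here is a hypothetical fragment shader (a sketch, not a tested exploit): the amount of work depends on a uniform set at draw time, so no compile-time check can bound its running time, and GPUs of this era generally cannot preempt a draw call in progress.

```javascript
// Hypothetical fragment shader illustrating the DoS vector.
// The effective loop length arrives at draw time via a uniform,
// so no static check at compile time can bound the running time.
const dosFragmentShader = `
  precision highp float;
  uniform float iterations;  // a malicious page sets this huge, e.g. 1e9
  void main() {
    vec4 c = vec4(0.0);
    for (float i = 0.0; i < 1e9; i += 1.0) {
      if (i >= iterations) break;  // work is effectively unbounded
      c += vec4(sin(i), cos(i), sin(i * 0.5), 1.0);
    }
    gl_FragColor = c;
  }`;

// Compiling and drawing with this shader plus a huge 'iterations'
// value can stall the GPU; on many drivers that freezes the whole
// desktop, not just the browser tab.
```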

Beyond that, the shader compiler backends are supplied by third parties because they generate hardware specific opcodes and optimizations. Considering how poor nVidia and ATI's drivers have been historically, do you really trust them to create secure compilers?

In theory, Microsoft could implement a subset of WebGL without shaders, but I'd rather it not exist than be crippled and unusable. There are probably several other potential vulnerabilities I don't have firsthand experience with too.

All that said, I think that Java applets & JOGL already expose this attack vector.


Those, like ActiveX controls, require the user to agree to install something that can do whatever it wants to their system. WebGL would be available to any web site.


That should be easy enough to duplicate, just make the feature one of those "Site X wants to use WebGL. [Allow] [Deny]" things. It would make it useless for ads and such, of course, but it seems that most people want to use it for games. Might be workable.


This is not a solution because a user would not know how to answer that question. We don't want ActiveX all over again.

The solution is to address the problems in the article.


I can also support their claims with some preliminary research I've done. When I have free time, I've been poking at the security of shader compilers -- both the translation that's done in Webkit and the compilers in drivers -- and things are pretty bad. Don't have much to show for it yet (a couple non-exploitable crashes/bugs), but I think future research will prove very fruitful.


I don't have a response, but Microsoft supposedly thinks it is securable enough for Silverlight:

"With the release of Silverlight 3 Beta 1 GPU (Graphics Processing Unit) acceleration (or hardware acceleration) is now available."

I'd like to hear what could make that secure that couldn't be used with WebGL.


I don't know anything about how they're doing this, but I wonder if it has something to do with the bytecode abstraction used in D3D. In D3D, you compile HLSL to bytecode which then gets passed down to the kernel, but in the OGL world you pass source straight down to the kernel. Definitely still risks there, but significantly diminished, and some simple verification in userland would make it next to impossible to get a lot of nasty code down into the kernel.


MS could (and probably would) write an ANGLE-style abstraction layer that runs WebGL on top of D3D9 or D3D11 anyway, so I think source going to the kernel isn't a big deal.

It's possible that because MS has more knowledge of how the graphics drivers work in Windows, they know about some dangerous security holes that Mozilla, Google, or even Nvidia or AMD aren't aware of, but it's equally possible that they just don't want to support WebGL for political reasons and this is a semi-technical excuse.


The D3D bytecode IR generated by the HLSL shader compiler in the runtime isn't verified for security concerns before being passed to the driver, so there's no real extra protection there; ultimately D3D makes it no harder to lock the GPU than OpenGL (ES, in the WebGL case) does.


It is a completely separate house / separate bed issue.

You do not have permission to call Direct3D directly. You can't even do cool hacks like you can in WPF, stealing the Direct3D video feed and writing it to a movie file. Everything is abstracted away by (underpowered) APIs.

All Secunia advisories on .NET Framework / Silverlight are presently patched, and the total number is relatively small compared to other technologies like web browsers and Flash. I don't really know enough about WebGL to compare, though.


Silverlight 5 is a better comparison, this blog has the details: http://muizelaar.blogspot.com/


That blog is trolling. The blog author copied and pasted the material from the blog we are currently discussing, and replaced WebGL with Silverlight.

Silverlight is more secure than WPF, too, by the way, and has to be. For example, in WPF there is a very insecure static method that allows you to steal a bitmap of the entire screen! This was one of the first things taken out of Silverlight.

If you want to know more about Silverlight security, ask Nick Kramer who maintains the Silverlight security best practices document for Microsoft.


It's the same thing as Safari, IE, and Chrome using the GPU to composite pages: Silverlight is hardware accelerated, but does not expose the hardware acceleration primitives. GL or Direct3D expose more-or-less direct hardware access, which is very different.


It does in Silverlight 5 through the XNA API.


Ah, awesome!

(I worked on XNA while at Microsoft about a year ago)

I knew that this was happening, but didn't know they released it yet. It's my understanding that they worked crazy hard to make this secure.


Well, either Google and Mozilla are knowingly shipping insecure code, or Microsoft is, to some degree, wrong. Given the history of the organisations in question, I'm going to assume Microsoft is wrong or not telling the whole story until we hear a response from Moz or Google.


Microsoft has the perspective of the entire OS from top to bottom. Google and Mozilla have the perspective of their respective applications.

Given that the concerns Microsoft is voicing (and they aren't the first to voice them) sit well below the application itself, I tend to trust Microsoft more on this one.


What about Apple? They're clearly working on WebGL support for Safari. They have at least equal insight into the whole stack, considering they ship the GPU drivers with the OS.

Then there's XNA in Silverlight. If they believe in the security of that, why not build WebGL on top of it? Probably because they're in direct competition with one another, and Microsoft wants Silverlight to win.


Apple has announced that, for now, WebGL on iOS will only be available to WebKit when it's displaying advertisements through iAd (where Apple controls which ads are distributed), and not to web pages generally.

http://www.theregister.co.uk/2011/06/16/webgl_in_ios_5/


I would think that security in this regard is far easier for Apple than it is for Microsoft. By virtue of their closed hardware, Apple has a very limited set of graphics cards for OS X to support. I'd imagine that this makes graphics drivers a lot easier to police and keep secure.


Let's face it, AMD, nVidia and Intel have >99% of the PC GPU market, and 100% of the Mac market. The drivers only change from one chip generation to the next, and the Mac has had chips from all recent generations from all 3 manufacturers. The total number of drivers is therefore identical. I really don't think there's much in it. See also: Silverlight's XNA.


Silverlight's integration with the host browser, DOM and Javascript is pretty good.

I'm pretty sure you could build a WebGL API on top of Silverlight by yourself. It would suck, but it is possible.

That's why I always thought Silverlight is pretty cool as a technology, as compared to Flash. Among other things, a Silverlight plugin could also allow you to serve OGG Theora videos to users.


This is an argument from ignorance and, knowing engineers at both Google and Microsoft, I have no reason to believe that the Google engineers are any more competent than Microsoft. I do know that the Google security team is smaller than Microsoft's.


Hence the 'not telling the whole story' part. Would Microsoft play politics with something like this? Not saying that's the case, only that one would have to be silly to think they wouldn't.


This is a false dichotomy. Mozilla and Google could both be unknowingly introducing the possibility of bugs and errors. The depth of their testing could be inadequate -- bugs could occur only on certain combinations of hardware and driver (and even, versions of drivers).


I believe Google and Mozilla (doesn't Safari support WebGL?) are shipping code that interfaces with insecure GPU drivers.

We cannot, however, discount the incentive Microsoft has in preventing the formation of another standard it can't control. I would consider any info coming from Redmond on this issue to be somewhat exaggerated.

And Windows-specific.


Speaking as someone doing security research on WebGL but no real dog in this fight (aside from developing on WebGL on the side, making me potentially biased in its favor), nothing MS has said is Windows-specific or remotely exaggerated. In fact, they explicitly didn't talk about many potential attack vectors against WebGL, which makes me think they really don't care much about this either way.


I asked Paul Irish, Chrome Developer Relations: @paul_irish re http://bit.ly/kOZ7Lp - MSFT wrong? Security risk in Chrome/FF with WebGL?

http://twitter.com/#!/paul_irish/status/81492337108328448 @AlexGraul i think chrome's record in pwn2own is a good indicator of our commitment to security while delivering great features. :)


Note he didn't actually answer the question.


I'm not at all an expert to handle details about the security of WebGL. I have no idea on that front.

I do know Chrome and FF just fixed a timing attack vector where you could apparently intuit the content of a cross-domain image by measuring how long shaders took to render it, since the timing varied with the hues being processed. Which means hypothetically you could read text. Like a crazy-person's OCR.
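My understanding of the fixed bug, sketched below (renderWithShader is a made-up name, not a real API): if a shader's running time depends on the value of the texel it reads, draw timing becomes a read primitive even though the pixel itself is never directly readable.

```javascript
// Sketch of the timing side channel. A shader whose cost varies
// with a cross-domain texel's value leaks that value through how
// long the draw call takes.
function measureDraw(renderWithShader, x, y) {
  const start = Date.now();
  renderWithShader(x, y);  // draw cost depends on the texel at (x, y)
  return Date.now() - start;
}

// Sample every (x, y), threshold "fast" vs "slow" draws, and you
// have reconstructed a rough copy of the image -- then OCR it.
```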


Or like virtual Van Eck phreaking.

http://en.wikipedia.org/wiki/Van_Eck_phreaking


Has JOGL in Java web applets always had these same vulnerabilities? Is JebGL (http://code.google.com/p/jebgl/) also vulnerable to the same points?

Should Microsoft stop supporting Java applets for the same reason?


That's not true. Most comments agree with you. [edit: and even if it was earlier, only a couple are anti-MS in total]


Not when I posted the comment.


This is why I'm absolutely terrible at security considerations. When I look at webGL, I think what's the problem? So what if you have direct access to the GPU?

My naive view of the graphic card is: shader instructions -> VIDEO CARD -> PIXEL DATA

Shader instructions are limited to a specified function set directed at transforming and calculating numbers. What possible risk can a calculator represent?

Video Card is a hardware device that simply implements the calculator language. It only has access to the numbers and data it was supplied. It crunches the numbers, and then returns pixel data. The video card is a black box, numbers go in, numbers come out.

Pixel data is just a set of numbers that represent color. You take the pixel data and you send it to the monitor.

Honestly, what could possibly go wrong?

Seriously, I have a hard time comprehending the "surface area" of the attack. I'm familiar with ideas like memory buffer overflow attacks--but I mean video cards have their own dedicated memory; even if you manage to read some memory outside of your allocated block, you'd only be getting numbers from the video card memory, which is just geometry definitions...

Actually, just now typing that last paragraph I think I figured out why gaining access to the GPU memory would represent a security concern. I suppose if that memory contained screen pixel data, it could be used to "read" what was on the screen via some form of OCR? Or, perhaps maybe depending on the OS implementation, more than just "pixel" data may reside in the GPU memory.

Gr, it's really quite frustrating that these stupid security issues keep getting in the way of forward progress--or more so that large company vendors are backpedaling simply because it's a "hard" problem to tackle.


Read these papers:

http://www.contextis.com/resources/blog/webgl/

http://www.contextis.com/resources/blog/webgl2/

They go into more depth about WebGL and security than this article does.


That is extremely informative. How did you find it?


I saw that this story link was really just a recap of an original source (which appears to be down for maintenance now). The original source linked to these two papers, which upon a quick read had some content that I wasn't familiar with, so I thought it might be news to others too.

In general, when I see blog recaps I almost always go to the source. This blog entry, like most good ones, will link to their sources.

I wish I could say it was because I'm an expert on OpenGL and security. I'm really just a rabid link follower.


Honestly, what could possibly go wrong?

The biggest issue is that GPUs can do DMA to read and write directly in main RAM. This can be exploited to overwrite arbitrary kernel data structures, for example.


But before the code gets to the GPU it has to be compiled, and that's the job of the GPU drivers, which are known to crash often. Video card drivers are the problem. If you can manage to make them crash in a predictable way, then you have a way to crash the machine and, I guess, reboot the box on Windows. Who would have thought that a small program to do graphics manipulation could be used to reboot a box? Something is wrong here on many levels.

My WebGL app has frozen a few machines in the past with random bugs that I haven't figured out yet. I tell my users to be careful but I know someone out there knows how to do it predictably.

But you know what, it's about time Nvidia/ATI/Intel/others pull their own weight and fix the damn issues with their drivers. If it can be done with WebGL it can be done with native applications. Flash Molehill probably too.


I'd be more worried that the drivers run in kernel mode and this could actually allow a new class of root-level arbitrary code execution vulnerabilities.


That's definitely an issue. Is there a way around graphics drivers in the kernel? Userspace graphics drivers could be interesting if they were possible.


AFAIK both Windows (WDDM) and Linux (DRI) use userspace drivers. http://en.wikipedia.org/wiki/Windows_Display_Driver_Model


You're definitely correct about Windows (it was something I was excited about in Vista), but I'm not sure about Linux. Looking at the Nvidia GLX drivers for my Linux machine, it still seems like quite a hefty kernel module. I can't really examine it, though, because it's not open source.


Actually, the part of it that runs in kernel mode does have its source visible. It's part of the standard packaging provided by NVIDIA so that it can work on many different kernel versions. For the most part their driver is also userspace, and the kernel driver exposes a communications pathway to the userspace. That's not to say I know how robust that pathway is, but they have separated things somewhat.

The open source drivers also follow this kind of model where the kernel modules handle hardware access and memory management but not a whole lot more. Mesa (and Gallium 3D) all work in userspace to handle compiling things to the native formats of the cards.


User space drivers are possible, the problem is efficiency nosedives. The premise of microkernels[1] is that the kernel itself is "trivially" small and everything else runs as a userspace task.

[1] http://en.wikipedia.org/wiki/Microkernel


Isn't that what microkernels are all about? http://en.wikipedia.org/wiki/Microkernel


>it's about time Nvidia/ATI/Intel/others pull their own weight and fix the damn issues with their drivers

It's been about time for 15 years, and they still haven't. Video card drivers are the #1 reason for Windows crashes. Microsoft put a lot of pressure on these companies (and Microsoft knows pressure) and it still didn't help.


"Honestly, what could possibly go wrong?" "it could be used to "read" what was on the screen via some form of OCR?"

That is it. They can:

a) Make your computer "impossible" to use (until you power it off and restart it), what they call denial of service.

b) Get access to your super important and super confidential screen information, like your bank account or your plans to dominate the world.

I agree with you, it's not a big deal to me. I find WebGL to be very useful, like Google Body (use Google Chrome, Firefox is slower): http://bodybrowser.googlelabs.com/body.html

Common sense applies here: if you are going to use financial services, just use them. Visit WebGL sites when you are not doing something serious and the problem is solved. Also, while it is technically possible, it is really difficult to exploit; I program GPUs.

I understand Microsoft's position here too: they don't want people suing them because someone managed to get their financial data and steal their identity and money while watching cool WebGL animations. But I think you can trust the big guys (Google) on this one; they are not going to steal your financial data, in some cases they already have it :-).


I think their concern is that the hardware drivers have bugs in them that could lead to remote compromise, not access to the GPU per se. Microsoft knows something about this, as the execution of untrusted/unsigned shader code allowed the Xbox 360 to run unsigned code back in the day: http://en.wikipedia.org/wiki/Free60


From the WebGL spec:

"It is possible to create, either intentionally or unintentionally, combinations of shaders and geometry that take an undesirably long time to render. This issue is analogous to that of long-running scripts, for which user agents already have safeguards. However, long-running draw calls can cause loss of interactivity for the entire window system, not just the user agent.

In the general case it is not possible to impose limits on the structure of incoming shaders to guard against this problem. Experimentation has shown that even very strict structural limits are insufficient to prevent long rendering times, and such limits would prevent shader authors from implementing common algorithms.

User agents should implement safeguards to prevent excessively long rendering times and associated loss of interactivity. Suggested safeguards include:

Splitting up draw calls with large numbers of elements into smaller draw calls.

Timing individual draw calls and forbidding further rendering from a page if a certain timeout is exceeded.

Using any watchdog facilities available at the user level, graphics API level, or operating system level to limit the duration of draw calls.

Separating the graphics rendering of the user agent into a distinct operating system process which can be terminated and restarted without losing application state.

The supporting infrastructure at the OS and graphics API layer is expected to improve over time, which is why the exact nature of these safeguards is not specified."
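The "time individual draw calls" safeguard could be sketched roughly like this (guardedDraw and MAX_DRAW_MS are made-up names, and a real implementation lives inside the browser, not in page JS):

```javascript
// Sketch of a per-draw watchdog. Once a single draw call exceeds
// the budget, the page is cut off from further rendering.
const MAX_DRAW_MS = 200;        // assumed budget, not from the spec
let renderingForbidden = false;

function guardedDraw(gl, drawScene) {
  if (renderingForbidden) return false;
  const start = Date.now();
  drawScene(gl);
  gl.finish();                  // block until the GPU actually finishes
  if (Date.now() - start > MAX_DRAW_MS) {
    renderingForbidden = true;  // the "forbid further rendering" step
  }
  return !renderingForbidden;
}
```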


They have a point. OpenGL was not designed with security in mind (same goes for Direct3D). Prior to WebGL there wasn't really a need for security, since applications making use of OpenGL would typically be considered trusted.

Index buffers into vertex arrays, for example, are not bounds checked in OpenGL. This makes perfect sense in terms of efficiency, but can lead to code execution in adversarial settings. I'm sure there are many more cases where choices were made in favor of speed, and where security was simply not a concern.

Having said that, all these things can certainly be fixed. For example, Microsoft could add a security layer that does all the bounds checking prior to passing on the commands to the driver (so securing all the drivers is not necessary). Makes me wonder how Chrome or Safari handle this. Anyone know more?
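A bounds-checking layer of the kind described might look roughly like this (a sketch; validateIndices is a made-up helper, not a real browser API):

```javascript
// Sketch of the validation a browser must do before handing an
// index buffer to the driver, since OpenGL itself won't check.
function validateIndices(indices, vertexCount) {
  for (let i = 0; i < indices.length; i++) {
    if (indices[i] >= vertexCount) {
      // An out-of-range index would make the GPU read memory
      // outside the vertex array -- reject it here instead.
      return false;
    }
  }
  return true;
}

const good = new Uint16Array([0, 1, 2]);
const bad  = new Uint16Array([0, 1, 9000]);  // reads far past the array
console.log(validateIndices(good, 3));  // true
console.log(validateIndices(bad, 3));   // false
```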


They do bounds checking and shader verification using ANGLE.

http://code.google.com/p/angleproject/


The shader verification isn't perfect (e.g. mishandling unicode), though, and is only syntax deep -- if you find bugs in the compiler based around, say, too many function calls nested together (e.g. foo(bar(baz(...))) ) then it will go through ANGLE with no changes. It's a good effort, but needs a lot of work. I've been fuzzing it off and on for that reason.
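For instance, inputs like the nested-call case mentioned above are trivial to generate (a toy generator; the names are made up):

```javascript
// Toy generator for deeply nested call expressions: syntactically
// valid input that a syntax-only validator passes through unchanged,
// but that can stress a recursive-descent compiler in the driver.
function nestedCalls(depth) {
  return 'foo('.repeat(depth) + '1.0' + ')'.repeat(depth);
}

nestedCalls(3);      // "foo(foo(foo(1.0)))"
nestedCalls(50000);  // deep enough to strain a recursive parser
```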


I'm not a fan of most MS products, and I get a bit of NIH syndrome vibe from the article. However, the security argument is spot on.

Perhaps browser developers have come up with strong countermeasures, but experience shows the state of OS and app development today is still ineffective in the area.

I don't allow Java, most Javascript, and most Flash to run on my system either, so it's nothing personal. With direct access to hardware, flaky video drivers, and no thought to security, the possibilities for disaster are even greater with WebGL. Remember Windows has graphics drivers in the kernel.

Also, as long as browsers continue to execute objects on page load instead of behind a play button security breaches will continue.


"Remember Windows has graphics drivers in the kernel"

Post-XP, very little of the graphics driver is in the kernel (just the KMD bits). It is mostly all in user space.


Ok, although "when there's a will, there's a way." ;)


Why is deferring execution to a button any better? A user that visits a site is implicitly trusting it; it doesn't take additional trust to click a link (or hover over one, for that matter). Sandboxing JS to only explicit user events would take us back to the browser stone ages.


It is much better security/performance-wise to only run programs you intend to.

There are sites you visit every day like HN, and there are those sites you rarely if ever visit. Perhaps a link from a link to one of the stories here.

Not allowing those sites to autoexecute code is very powerful. Hell, not allowing tracking code to run at trusted sites is just as powerful.

With Flashblock et al., I can go to strange URLs and not have to worry about dangerous, performance-sapping shit getting loaded. There is a whitelist for sites/scripts I choose to enable.

If you've never used NoScript/Ghostery, give them a try. Your eyes will be opened to the sheer amount of garbage loaded even on "trustworthy" sites.


This is absolutely correct. Exposing video drivers to untrusted code is bonkers. Unfortunately, the ad-hominem attacks against Microsoft will likely prevail here.


Cue the chorus of people saying this is because Microsoft sucks etc etc.

Ignore the fact that Microsoft has spent more time and resources than any technology company in the world focusing on web related security. Mind you that is not an endorsement of their track record, but a statement with respect to the reality on the ground.


> Microsoft has spent more time and resources than any technology company in the world focusing on web related security.

Mostly because they had to. If others spent less it could be because they had a smaller vulnerable surface to begin with, or simpler codebases.


You evidently haven't seen the massive codebases from the Mozilla or Webkit camps, or the huge number of vulnerabilities therein in recent years. IE is a huge piece of shit, but MS has done a lot for web security. Mind you, I think Google's efforts have been more fruitful, but writing off MS's effort is silly.


The way IE is reliant on Windows and vice-versa, you can't really draw a clear line separating Windows and IE (at least not security-wise). It's very different with Mozilla and Webkit codebases, where the clean separation exists (as they are both multi-platform). I believe IE's base also carries lots of cruft from previous versions, something that complicates matters even more.

Microsoft did a lot, quite possibly because they had a whole lot to do. No other company has an OS that large (Windows is huge) so tightly coupled to a browser.


That's pretty much complete BS. IE is no more dependent on Windows than Chrome is, but yes, various bits of the Windows UI are dependent on the Trident engine in IE. Either way, IE is no more inherently insecure than any other browser, despite its many vulnerabilities in the past. In addition, security improvements in the OS itself affect not only IE, but any browser running on top of it, making your claims even more silly.

I know you like to hate on MS (how many times have we been over this in the past few months?), but really, your claims here are completely unfounded. IE has a completely shit track record, but they have done a whole, whole lot in recent years to improve its security. Look at the many, many vulnerabilities in other browsers/layout engines compared to IE in recent years -- it's pretty startling. There are many reasons to hate on IE, but the effort put into securing it -- and the way they've gone about securing it -- is definitely not one of them.


The part you seem not to get is that if you have a badly engineered product, it's only natural that you have to invest more than those who don't, regardless of how bad your track record ends up being.

As for your claim IE is no more dependent on Windows than Safari or Firefox, I will have to take your word for it, as it has been a long time (almost a decade) since I last inspected IE and what interfaces it used from the underlying OS. At that time, they both seemed very intimate.

And yes. I derive a lot of fun from observing Microsoft.

As for investments in an OS affecting all software running on it, it all depends on the fix not being the introduction of an improved API, as software using the old one will remain vulnerable.


What? I'd love a citation on that. And I've spent a lot of time trying to make a baby by myself, but that doesn't mean I've been the least bit successful.

Seriously, come on. Where is this evidence that Microsoft has "spent more time on web security" than anyone else? Their track record sure doesn't support it. Is there a competition among the big-3 to compare amount of time spent on web security?

In fact, that Microsoft supposedly spends so much time on web security and continues to fail so badly makes me feel much worse about their opinions on the security of WebGL. This is also the company, mind you, that introduced the decade-long nightmare of ActiveX.


I'm actually a big fan of several Microsoft products, and think their stand on WebGL is completely right.

OEM video card drivers already have a history of being unstable. In all likelihood, this represents an attack vector that, up until this point, I hadn't even thought of.

That said, it highlights something that has become somewhat of a trend with Microsoft. They seem to have decision making in almost every area being done by different teams who don't talk to one another. While I agree that OpenGL isn't "safe", what about ActiveX? Isn't allowing a browser to execute arbitrary code directly against the processor (regardless of whether or not it's "signed" and the user had to click a few dialogs) a far greater threat? I'd love to see a press release that indicated they were ripping that functionality out of all currently supported browsers. What about Silverlight? I thought that had elements of hardware acceleration (perhaps it's protected enough, who knows?).

Kudos to them for taking a well-reasoned stand for security in an application type that is routinely used to attack users, but I'd like to see the other 'harmful' features addressed as well. For now, I find Firefox with NoScript and Adblock+ to be the safest bet.


Back in 2000, I remember that a PC game I was working on was slower on NT than on Windows 98 (or was it ME?). The explanation from Microsoft (they were the publisher too) was that on NT they had to check every vertex index for being out of bounds, because otherwise one might somehow be able to read memory he's not supposed to. And a couple of other things too.

I guess it might be a valid concern. Though I would love if WebGL becomes adopted by everyone.


Yeah, sure. Silverlight is the future!


Actually, that's an interesting point. Silverlight allows pixel shader usage, which they view as unsafe in WebGL. It'd be interesting to read the SDL team's views on this.


Looks like this is the answer: http://news.ycombinator.com/item?id=2663497


"Microsoft believes that WebGL will become an “ongoing source of hard-to-fix vulnerabilities” in its current form."

If only they'd had the same insight with ActiveX, and all of the other attempts to turn Internet Explorer into a zero-install native application execution environment.


They have a point, but coming from the company that invented ActiveX, I find it rather funny :)


I would argue ActiveX gives MS a lot of credibility here. If they're saying it's insecure...


Actually, thanks to the backlash they got in Windows XP pre-SP2 times, they are very security-conscious now. Blessing in disguise.


I don't understand how running GPU exploits in a native application is anymore secure.


My biggest question with WebGL is: who is it targeted at? Certainly it can't be game developers, because JavaScript is still way too slow to manage a game world and vector logic. Pipelining assets is also a big problem in HTML, and I just can't see many devs taking it seriously as a way to deliver 3D games.


Check out my application: https://brainbrowser.cbrain.mcgill.ca (sorry if it's a little hard to use, I'm working on documentation but my user base usually knows what all the stuff means.)

It's in use by neuroscientists for a variety of stuff. I have users on almost every continent. (I need people in Australia and African countries ;p)

There are many games that could be done with WebGL as it is today. Some little RTS games that could be fun. Zynga has proven that you don't need anything fancy to make money from games.

As for other applications, well I think http://ro.me is a great example of something that can be done with WebGL. I bought the CD just because of that video. It was brilliant marketing.


Your app is a great example as the requirements are very finite. Web games on the other hand seem better suited for 2D, since the requirements for 3D games are much larger.


There are definitely some things that will need to change for 3D games to be possible in WebGL. LocalStorage limits (unless you use the Chrome Web Store) prevent games from storing, say, 300 MB of data in your browser for later use. For bigger games we are talking about GBs. The audio API is also in flux right now and isn't ready for prime time in my view, but it's getting there. WebCL is another thing that could help; game physics will probably make good use of it.

But at the same time, games that can run on your iPad should be possible in WebGL without too many issues. I'm looking at games like the new Age of Empires Online (since I played that not too long ago). I feel that it's a game which could be implemented using WebGL, a larger local storage limit, and proper WebSocket support.

I'm sure you will find that I'm very optimistic but my app barely pushes the capabilities of WebGL and new javascript APIs.


You can preload the assets and store them in the HTML5 app cache. If you've seen the WebGL Quake 2 port, you know that JavaScript is fast enough to do games, at least the lighter ones, like the stuff Telltale makes.


WebGL can be the alternative to Flash games on the web. The performance can be better and the graphics more advanced.


But GPU-accelerated 2D canvas should provide that alternative; 3D games are another matter entirely.


I'm using it for a couple things. Initially I was just writing demo code to play around with it and take advantage of the ease of collaboration that the web gives you. Now, however, I'm building an MVP of an idea I've had in mind for a while, where you can design a figurine of your kid/friend/partner/whatever and get it 3D printed -- WebGL makes it simple and easy.


Possibly off topic, possibly related, but can anyone explain to me how Google's Native Client (NaCl) is different from ActiveX and Flash? Sounds like a security exploit waiting to happen.


The whole point of NaCl is that it provides a safe sandbox for running verified native code - the Wikipedia article has more on this: http://en.wikipedia.org/wiki/Google_Native_Client

Google ran a security contest to find problems with it, which turned up a few. I'm not sure what the status is at the moment with regards to security: http://code.google.com/contests/nativeclient-security/faq.ht...


If your GPU has access to the same memory your OS is running in and you can make it run your own code, isn't the machine vulnerable?


Looks like O3D would have been a better approach after all.

Did Google back the wrong horse?


As far as I know this would also have been possible in O3D. My current app started out on O3D, but I think the WebGL move was smart. Now if they could maintain the o3d-webgl library they wrote for the transition, I would be really happy. I feel they abandoned the O3D crowd and a pretty good library.


So… "native HTML5" in IE10 doesn't include 3D graphics?


If I think about it I really don't want to expose ring0 code (gfx drivers) to the web.


Wow. Could they be more transparent? This couldn't have anything to do with Direct X, could it?


It's not about whether it could affect DirectX or not - one game crashing your GPU due to some unintentional bug (there's no incentive on the game publisher's side to make it intentional) is not the same as the Web, where anybody can push something to your browser that exploits a hole in some graphics card driver. Microsoft has a point. They do not explain, however, how Silverlight is immune in the same situation.


Exactly. TFA: "Attacks that may have previously resulted only in local elevation of privilege may now result in remote compromise."


Whatever. Whoever makes WebGL games can reach about 50% of the total browser market, and that share will keep increasing, as I doubt the new versions of IE (9/10) will replace the older IE browsers, especially since the new ones only run on certain Windows versions.

Whether Microsoft embraces WebGL or not is irrelevant, because it would only add the roughly 5% market share they have with IE9 anyway. I think WebGL developers can safely ignore the IE9/IE10 markets.
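Ignoring IE gracefully still means detecting support at runtime so those users get a fallback instead of a broken page. A sketch of the usual feature-detection approach (the `doc` parameter is passed in for testability; in a page you'd pass `document`):

```javascript
// Sketch: detect WebGL support by asking a <canvas> for a 3D context.
// Browsers of this era often expose it as "experimental-webgl".
function hasWebGL(doc) {
  try {
    var canvas = doc.createElement('canvas');
    return !!(canvas.getContext('webgl') ||
              canvas.getContext('experimental-webgl'));
  } catch (e) {
    return false; // getContext threw -- treat as unsupported
  }
}
```

A game would branch on `hasWebGL(document)` and show a "your browser doesn't support WebGL" notice (or a Flash/2D fallback) in the IE case.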


But can Google and Mozilla ignore the security concerns that are an inherent part of WebGL?


No, they can deal with them. The only inherent thing about the security concerns WebGL exposes is that there will be some, not that they can't be fixed.


For those who think that HTML/CSS/JS will replace native apps, just take a look at Microsoft, Apple and Google (yes, Google - no thanks for ditching the open standard h264 on Chrome).

Standards, when they threaten to disrupt existing powerful players, will be ignored, delayed or sabotaged.

Thus innovation that requires standards-based clients, even with nimble outfits like Apple, Google, Mozilla, and Facebook pushing the envelope, will take much longer than with a cohesive, well-driven native app platform.


>thanks for ditching the open standard h264 on Chrome

I think that someone may have lied to you when they explained the words "open" or "standard".

Also, why am I "looking at" MSFT/AAPL/GOOG in relation to HTML/CSS/JS apps? The fact that many are still native? Google's apps for iOS are web based, and I'm sure iCloud apps for Android will be web based. Microsoft just got done demoing HTML5/CSS3 applications for Windows 8.

Your comment is all over the place without actually saying much or really making any legitimate claims.


> I think that someone may have lied to you when they explained the words "open" or "standard".

h.264 is not royalty free, but it is open and a standard (from Ars: http://arstechnica.com/web/news/2011/01/googles-dropping-h26... )

"In the traditional sense, H.264 is an open standard. That is to say, it was a standard designed by a range of domain experts from across the industry, working to the remit of a standards organization. In fact, two standards organizations were involved: ISO and ITU. The specification was devised collaboratively, with its final ratification dependent on the agreement of the individuals, corporations, and national standards bodies that variously make up ISO and ITU. This makes H.264 an open standard in the same way as, for example, JPEG still images, or the C++ programming language, or the ISO 9660 filesystem used on CD-ROMs. H.264 is unambiguously open."


That particular author has a chip on his shoulder about H.264 and WebM. I would consider taking anything he says on the matter with a grain of salt. His calling anything "unambiguously open" when the word "open" itself is so ambiguous is very poor rhetoric.

But if you look for precise definitions of the phrase "open standard" you might find that H.264 fails 14/16 of the various definitions offered by governments and standards bodies on the wikipedia page for that term. All failing for the same reason of charging patent fees. One of the other definitions is a historical accident that I doubt the same body would stand by today, or at any point in the last 5 or so years. The final definition, the one that it passes, just so happens to be written by patent attorneys of the people who developed H.264.


I guess I'll blame arstechnica for the misunderstanding then. I'm sorry, but I have an exceedingly difficult time calling anything "open" when they sit around and sue people or make promises not to sue them for arbitrary windows of time, due to the fact that they (the consortium) hold patents that they use to troll for profits.


who gives a shit about the quality of his comment.

your mother said to get off the computer


I don't understand. Did I offend someone, was I out of line?



