I want to start playing with the Oculus Rift and Web software in the near future, and re-implementing HTML rendering in WebGL is a black hole I want to avoid. If the Web has any clear advantage over other 3D platforms, it's its well-established, well-featured backlog of 2D interfaces. (The on-demand networked VM is pretty good too.) Point is, it's not something I want to implement from scratch, but it's also not something I want to drop off the roadmap.
What I've found so far: html2canvas and rasterizeHTML.js (the latter of which uses the SVG foreignObject trick) are close, but screwy in their own ways, and far less performant (try getting input events going, or even rendering the cursor blink). Using THREE.js, it's possible to do a "two-pass render" (a WebGL canvas overlaid on divs that are 3D-transformed by CSS), but that doesn't work here: the Oculus Rift requires the scene to be rendered twice (once per eye) and then run through a pixel shader to counter the lens distortion, and CSS-transformed divs can't be fed through that shader pass.
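For anyone unfamiliar with the SVG trick mentioned above, here's a minimal sketch of the idea rasterizeHTML.js builds on: wrap an HTML fragment in an SVG `<foreignObject>`, turn it into a data URI, and paint it onto a canvas as an image. The function name `htmlToSvgDataUri` is mine, not from any library, and the markup has to be well-formed XHTML; as soon as cross-origin content is involved, the canvas taints and can't be read back.

```javascript
// Sketch of the SVG <foreignObject> trick for rasterizing HTML.
// Assumptions: the HTML fragment is well-formed XHTML and same-origin;
// otherwise the browser taints the canvas and blocks readback.
function htmlToSvgDataUri(html, width, height) {
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">` +
    `<div xmlns="http://www.w3.org/1999/xhtml">${html}</div>` +
    `</foreignObject></svg>`;
  return "data:image/svg+xml;charset=utf-8," + encodeURIComponent(svg);
}

// In a browser, the resulting URI can be drawn to a 2D canvas and from
// there uploaded as a WebGL texture, e.g.:
//   const img = new Image();
//   img.onload = () => canvas.getContext("2d").drawImage(img, 0, 0);
//   img.src = htmlToSvgDataUri("<b>hello</b>", 256, 256);
```

This is a static snapshot, which is exactly why input events and the caret blink are so painful: you have to re-rasterize on every change.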
Some thoughts on solutions:
- A fully-featured 3D HTML & CSS toolset. I think this could work, but it will take time and specs.
- A "read-safe" buffer for marking areas of the canvas which can't be copied out, so that HTML rendering is secure enough to use as a texture (right now, rendering arbitrary HTML taints the canvas to prevent cross-origin data leaking out through pixel readback). Seems like this would have been considered by now.
If anybody has any pointers for working on this, or if there's a channel for developing a fix, I'd appreciate the help. Thanks!