rendering the DOM to a canvas


Paul Frazee
I want to start playing with the Oculus Rift and Web software in the near future, and re-implementing HTML rendering in WebGL is a black hole I want to avoid. If the Web has any clear advantage over other 3d platforms, it's a well-established and well-featured backlog of 2d interfaces. (The on-demand networked VM is pretty good too.) Point is, it's not something I want to implement from scratch, but it's also not something I want to drop off the roadmap.

What I've found so far: html2canvas and rasterizeHTML (the latter of which uses the SVG foreignObject trick) come close, but each is screwy in its own way, and both perform poorly (try getting input events working, or rendering the cursor blink). Using THREE.js, it's possible to do a "two-pass render" (a WebGL canvas overlaid on divs that are 3d-transformed by CSS), but the Oculus Rift requires the scene to be rendered twice, once per eye, and then have a pixel shader applied to counter the lens distortion.
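To illustrate that distortion pass: the correction is typically a radial barrel-distortion polynomial applied per-fragment in the shader. Here's the same math in plain JavaScript as a sketch (the coefficients in the comment are illustrative, not official SDK values):

```javascript
// Barrel distortion: scale each point radially by a polynomial in r^2,
// countering the lenses' pincushion distortion. (x, y) is a coordinate
// relative to the lens center; k is an array of 4 polynomial coefficients.
function barrelDistort(x, y, k) {
  const r2 = x * x + y * y;
  const scale = k[0] + k[1] * r2 + k[2] * r2 * r2 + k[3] * r2 * r2 * r2;
  return [x * scale, y * scale];
}

// e.g. barrelDistort(0.5, 0.5, [1.0, 0.22, 0.24, 0.0])
// In practice this runs in a fragment shader over each eye's half of
// the framebuffer, not on the CPU.
```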

Some thoughts on solutions:

 - A fully-featured 3d HTML & CSS toolset. I think this could work, but it will take time and specs.
 - A "read-safe" buffer for marking areas of the canvas which can't be copied out, so that HTML rendering is secure enough to use. Seems like this would have been considered by now.
 - A toolset for rendering HTML using JavaScript. If some of the internal APIs the browser uses to calculate layout and handle interactions were made available, a custom HTML renderer might become trivial enough to build inside the 3d application. (To a degree, this might be done by rendering to an off-screen div and reading pixel data back, but that's not efficient.)
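For reference, the "SVG trick" mentioned above works by wrapping HTML in an SVG foreignObject and loading the result as an image, which the browser rasterizes and which can then be drawn to a canvas (and from there used as a WebGL texture). A minimal sketch of the data-URI construction; the function name is mine:

```javascript
// Build a data URI for an SVG that embeds arbitrary HTML via
// <foreignObject>. Loading this URI into an Image gives a rasterized
// snapshot of the HTML (no external resources, no input events).
function htmlToSvgDataUri(html, width, height) {
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">` +
    `<div xmlns="http://www.w3.org/1999/xhtml">${html}</div>` +
    `</foreignObject></svg>`;
  return "data:image/svg+xml;charset=utf-8," + encodeURIComponent(svg);
}

// In a browser you would then do (not runnable outside one):
//   const img = new Image();
//   img.onload = () => ctx.drawImage(img, 0, 0);
//   img.src = htmlToSvgDataUri("<p>hello</p>", 256, 256);
```

Note this is exactly where the security question bites: browsers may taint the canvas after drawing such an image, blocking the pixel read-back a WebGL texture upload needs.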

If anybody has any pointers for working on this, or if there's a channel for developing a fix, I'd appreciate the help. Thanks!

dev-tech-dom mailing list
[hidden email]