JS-Implemented Image Codecs

JS-Implemented Image Codecs

Bobby Holley-2
I've spoken with various people, but I can't find any large prior
discussions about this issue. If I'm behind the times, please link me to
the relevant discussion.

I think we should consider speccing and implementing a web-exposed
mechanism for implementing image codecs in JS. I'm imagining something
along the lines of
<link rel="codec" mimetype="image/foopy" src="http://cdn.com/foopy/codec.js">.
The codec would be fetched (either from
the network or from the cache) and instantiated in a sandboxed worker (or
several), which would decode buffered source data into a pixel buffer. We
could give these worker scopes a very limited API (no XHR etc), and
sidestep a number of security concerns by mandating that cross-origin loads
for non-builtin image types are cookie-less.
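
For concreteness, here is a purely hypothetical sketch of what such a
codec worker might look like. Nothing below is specced: the 8-byte
header layout and the message shapes are invented for illustration.

    // codec.js -- hypothetical sandboxed codec worker for image/foopy.
    // It receives the buffered source bytes and posts back RGBA pixels.
    // No network APIs (XHR etc.) would be available in this scope.
    self.onmessage = function (e) {
      var src = new Uint8Array(e.data.bytes);   // { bytes: ArrayBuffer }

      // Parse a made-up 8-byte header: width and height as uint32s.
      var view = new DataView(e.data.bytes);
      var width = view.getUint32(0);
      var height = view.getUint32(4);

      // "Decode" into a pixel buffer. Here the payload is trivially raw
      // RGBA after the header; a real codec would decompress.
      var pixels = new Uint8ClampedArray(width * height * 4);
      pixels.set(src.subarray(8, 8 + pixels.length));

      // Hand the decoded buffer back to the UA, zero-copy.
      self.postMessage({ width: width, height: height,
                         pixels: pixels.buffer },
                       [pixels.buffer]);
    };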

Given that we're all about the extensible web these days, this seems like a
much better use of resources than picking, implementing, and
forever-supporting new image formats. JS is fast enough today that we
should let the web decide.

Thoughts?
bholley

Re: JS-Implemented Image Codecs

Anne van Kesteren
On Mon, Oct 28, 2013 at 9:46 AM, Bobby Holley <[hidden email]> wrote:
> Thoughts?

See https://groups.google.com/d/msg/mozilla.dev.platform/BKa6rzcKvdo/pby2siSuVkUJ
for some from roc.


--
http://annevankesteren.nl/

Re: JS-Implemented Image Codecs

Florian Bender
In reply to this post by Bobby Holley-2
On Monday, October 28, 2013 10:46:30 AM UTC+1, Bobby Holley wrote:
> The codec would be fetched (either from the network or from the cache) and
> instantiated in a sandboxed worker (or several), which would decode buffered
> source data into a pixel buffer. We could give these worker scopes a very
> limited API (no XHR etc), and sidestep a number of security concerns by
> mandating that cross-origin loads for non-builtin image types are cookie-less.

I was about to draft a message to the WHATWG mailing list proposing a sandboxed worker API (in part inspired by Node's sandboxed VM module, pending feasibility), so this is a new use case I had not yet considered …

To allow more efficient decoding, the Worker should have access to video
hardware (i.e. the GPU). This is possible via Canvas, WebGL, and especially
WebCL – which of these APIs should be exposed, and how? How does this
compare to the WorkerCanvas proposal on the WHATWG list?
SIMD/PJS may also be interesting APIs to expose.
--> Which APIs do you want to expose at all?
Do you want to reuse the interface workers currently have, or do you need
new ways for message passing etc.? (IMHO, there should be no new
communication primitives/APIs – higher-level abstraction APIs excluded –
so that the whole mechanism stays fully shim-able in JS, which eases
development.)
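
To make that concrete: a decoder could be driven with nothing but the
messaging primitives workers already have. A rough sketch (the message
shape and the decodeWithWorker helper are invented for illustration):

    // Hypothetical shim: drive a codec worker from content using only
    // existing worker messaging plus transferable ArrayBuffers.
    function decodeWithWorker(codecURL, bytes) {
      return new Promise(function (resolve, reject) {
        var worker = new Worker(codecURL);
        worker.onmessage = function (e) {
          resolve(e.data);            // { width, height, pixels }
          worker.terminate();
        };
        worker.onerror = reject;
        // Transfer the source bytes: zero-copy, no new API surface.
        worker.postMessage({ bytes: bytes }, [bytes]);
      });
    }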


> I'm imagining something along the lines of
> <link rel="codec" mimetype="image/foopy" src="http://cdn.com/foopy/codec.js">

When should this be honored? Only when the UA cannot handle the media type?
How about container formats with different codecs (e.g. "video/webm" vs. "video/webm;codecs=VP9" for the video case; AFAIU this post is only about images ATM)? (The problem arises with "broken" codecs, or when a browser later ships native support for a format that was previously shimmed.)

Re: JS-Implemented Image Codecs

Florian Bender
In reply to this post by Bobby Holley-2
On Monday, October 28, 2013 1:08:59 PM UTC+1, Anne van Kesteren wrote:
> See https://groups.google.com/d/msg/mozilla.dev.platform/BKa6rzcKvdo/pby2siSuVkUJ

On Sunday, October 13, 2013 11:17:34 PM UTC+2, Robert O'Callahan wrote:
> This is really tricky if you want the decoder to be able to handle
> non-same-origin images. The problem is that although we can isolate the
> worker at the API level, it's going to be almost impossible to prevent it
> from leaking information back to its origin via timing channels.

Is this also true if the worker only receives and passes along the bits (a typed array or Blob) or object URLs? I.e. the worker has no knowledge of the location or origin of the blob it receives (nor of other metadata like timestamp or name – only what is in the data itself, e.g. EXIF).

Is this a problem at all if the worker is unable to initiate connections (i.e. XHR etc. blocked, the worker can only do calculations)?


How does this affect passing back data that is not pixel data, e.g. EXIF?
It would be nice if the decoder were able to do image processing – say OCR
or shape detection – and pass the results back (via a meta JSON object?).
(For image-processing/metadata capabilities that are not part of a standard
like EXIF to be useful, the content needs a way to listen to decoder
events, resp. the decoder needs a way to pass additional data to the
content, e.g. via the Image object/DOM. That raises the question of whether
such image processing should be covered by a sandboxed decoder at all, or
instead be done in a regular worker.)
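
One possible (entirely invented) shape for such a reply – pixels
transferred out, metadata as a plain structured-cloneable object:

    // Hypothetical: what a decoder might post back when it also
    // extracts metadata (all names invented for illustration).
    function postDecoded(width, height, pixels, exif) {
      self.postMessage({
        width: width,
        height: height,
        pixels: pixels.buffer,        // transferred below, zero-copy
        meta: {
          exif: exif,                 // e.g. { orientation: 6 }
          regions: []                 // e.g. OCR / shape-detection output
        }
      }, [pixels.buffer]);
    }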

Re: JS-Implemented Image Codecs

Benjamin Smedberg
In reply to this post by Bobby Holley-2
On 10/28/2013 5:46 AM, Bobby Holley wrote:

> I've spoken with various people, but I can't find any large prior
> discussions about this issue. If I'm behind the times, please link me to
> the relevant discussion.
>
> I think we should consider speccing and implementing a web-exposed
> mechanism for implementing image codecs in JS. I'm imagining something
> along the lines of <link rel="codec" mimetype="image/foopy" src="
> http://cdn.com/foopy/codec.js">. The codec would be fetched (either from
> the network or from the cache) and instantiated in a sandboxed worker (or
> several), which would decode buffered source data into a pixel buffer. We
> could give these worker scopes a very limited API (no XHR etc), and
> sidestep a number of security concerns by mandating that cross-origin loads
> for non-builtin image types are cookie-less.
>
> Given that we're all about the extensible web these days, this seems like a
> much better use of resources than picking, implementing, and
> forever-supporting new image formats. JS is fast enough today that we
> should let the web decide.

Are you saying that we should do this *instead* of a next-gen format
with a binary decoder? Have we prototyped something like this to see if
it is in fact as fast and memory-efficient as a binary decoder? We'd
need to make sure that it has feature parity and can do things like
progressive decoding.

Why do you think this is important to do now? We have a pretty large
backlog of features already, and this seems relatively less important in
comparison.

--BDS


Re: JS-Implemented Image Codecs

Bobby Holley-2
In reply to this post by Florian Bender
On Mon, Oct 28, 2013 at 1:24 PM, Florian Bender <[hidden email]> wrote:

> To allow more efficient decoding, the Worker should have access to video
> hardware (i.e. GPU). This is possible via Canvas, WebGL, and esp. WebCL –
> which and how should these APIs be exposed? How does this compare to the
> WorkerCanvas proposal on the WHATWG list?
> SIMD/PJS may also be interesting APIs to expose.
> --> Which APIs do you want to expose at all?
> Do you want to reuse the I/F workers currently have, or do you need new
> ways for message passing etc.? (IMHO, there should be no new
> primitives/APIs for the communication, excluding higher level abstraction
> APIs, thus fully JS shim-able, to ease development.)
>

I don't have an opinion on this.


> > I'm imagining something along the lines of
> > <link rel="codec" mimetype="image/foopy" src="http://cdn.com/foopy/codec.js">
>
> When should this be honored? Only when the UA cannot handle the media type?
>

Probably. I could also see the value in letting authors alter codecs for
existing formats, but it's also probably important to allow UAs to
transparently deploy builtin implementations for formats that become
popular. We could allow both, I guess.


On Mon, Oct 28, 2013 at 2:32 PM, Benjamin Smedberg <[hidden email]> wrote:

> > Given that we're all about the extensible web these days, this seems
> > like a much better use of resources than picking, implementing, and
> > forever-supporting new image formats. JS is fast enough today that we
> > should let the web decide.
>
> Are you saying that we should do this *instead* of a next-gen format with
> a binary decoder?


Yes. Or at least in parallel. If I see a path towards ending an entire class
of format wars, I think we should pursue it rather than letting ourselves
get sucked into each battle.


> Have we prototyped something like this to see if it is in fact as fast and
> memory-efficient as a binary decoder?


Given what we've proven with asm.js, I suspect that we can achieve a pretty
high level of performance. Peak memory usage might be a different story,
but thankfully this is a transient quantity and probably dwarfed by the
size of the decoded image buffers.


> We'd need to make sure that it has feature parity and can do things like
> progressive decoding.
>

Agreed.


> Why do you think this is important to do now? We have a pretty large
> backlog of features already, and this seems relatively less important in
> comparison.


I don't know where it stands on the absolute scale of priorities, but it
certainly seems more important to me than picking sides in the JPEG-XR vs
WebP debate. Neither of those formats is going to take off without buy-in
from Mozilla, which means that there's no real competitive pressure on that
front - it's purely about making the web a better place. And if we believe
in the extensible web, I think content-supplied codecs are a much more
powerful tool in the long run.

bholley

Re: JS-Implemented Image Codecs

David Bruant-5
On 28/10/2013 15:51, Bobby Holley wrote:
> On Mon, Oct 28, 2013 at 2:32 PM, Benjamin Smedberg <[hidden email]> wrote:
>> Have we prototyped something like this to see if it is in fact as fast and
>> memory-efficient as a binary decoder?
>
> Given what we've proven with asm.js, I suspect that we can achieve a pretty
> high level of performance. Peak memory usage might be a different story,
> but thankfully this is a transient quantity and probably dwarfed by the
> size of the decoded image buffers.
Agreed. JS is ready when it comes to perf.

>> Why do you think this is important to do now? We have a pretty large
>> backlog of features already, and this seems relatively less important in
>> comparison.
> I don't know where it stands on the absolute scale of priorities, but it
> certainly seems more important to me than picking sides in the JPEG-XR vs
> WebP debate. Neither of those formats is going to take off without buy-in
> for Mozilla, which means that there's no real competitive pressure on that
> front - it's purely about making the web a better place. And if we believe
> in the extensible web, I think content-supplied codecs are a much more
> powerful tool in the long run.
Yes! Thank you <3
It allows anyone to experiment with image formats. "Innovation happening
at the edges", as they say. Afterwards, web browsers can see which image
formats emerge and implement more efficient codecs in C++ or Rust or
whatev's.
It's annoying to have ~400 comments on WebP bugs from frustrated web
devs. This proposal would allow anyone to do whatever they want without
having to wait for web browsers.

Side note: works for audio and video too.

> The codec would be fetched (either from
> the network or from the cache) and instantiated in a sandboxed worker (or
> several), which would decode buffered source data into a pixel buffer. We
> could give these worker scopes a very limited API (no XHR etc), and
> sidestep a number of security concerns by mandating that cross-origin loads
> for non-builtin image types are cookie-less.
I've read something about "Service Workers" recently, and they seem close
to what you describe. I haven't read everything about them, so I don't
know exactly how they differ. Anyway, I recommend taking a look:
https://github.com/slightlyoff/ServiceWorker

ParallelJS comes to mind too.

In any case, the how of this feature is likely to be different from what
we've had so far on the platform, so if the idea takes off, I recommend
posting at some early point to public-script-coord to ask for
ECMAScripters' opinions (concurrency/parallelism will be a strong theme
of ES7, so they will want to know about these sorts of use cases).

David

Re: JS-Implemented Image Codecs

Justin Dolske-2
In reply to this post by Bobby Holley-2
On 10/28/13 6:32 AM, Benjamin Smedberg wrote:

> On 10/28/2013 5:46 AM, Bobby Holley wrote:
>> I've spoken with various people, but I can't find any large prior
>> discussions about this issue. If I'm behind the times, please link me to
>> the relevant discussion.
>>
>> I think we should consider speccing and implementing a web-exposed
>> mechanism for implementing image codecs in JS.
>> [...]
>
> Why do you think this is important to do now? We have a pretty large
> backlog of features already, and this seems relatively less important in
> comparison.

I'm wondering the same. If there are some simple changes we can make to
enable decoders supplied by _add-ons_, that might be useful for helping
to experiment with new formats. But overall the need to add new formats
to the web seems pretty rare (although we happen to be living in one of
the moments where there are some options).

Seems like a site that wants to experiment with a compelling new format
could get most of the benefit today with Canvas+XHR+JS. It would be a
bit hacky and have some limitations, but would be a fine way to
incrementally prove the merits of the format, and get things rolling
without waiting on browser vendors at all.
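
A rough sketch of that approach – decodeFoopy() below stands in for
whatever JS decoder the site would ship; it is not a real API:

    // Hypothetical Canvas+XHR polyfill for a new image format.
    function loadFoopyInto(canvas, url) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url);
      xhr.responseType = "arraybuffer";
      xhr.onload = function () {
        // decodeFoopy: site-supplied decoder returning
        // { width, height, pixels /* Uint8ClampedArray, RGBA */ }.
        var decoded = decodeFoopy(xhr.response);
        canvas.width = decoded.width;
        canvas.height = decoded.height;
        var ctx = canvas.getContext("2d");
        var img = ctx.createImageData(decoded.width, decoded.height);
        img.data.set(decoded.pixels);
        ctx.putImageData(img, 0, 0);
      };
      xhr.send();
    }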

Justin

Re: JS-Implemented Image Codecs

Steve Wendt
On 10/28/2013 4:52 PM, Justin Dolske wrote:

> I'm wondering the same. If there are some simple changes we can make to
> enable decoders supplied by _add-ons_, that might be useful for helping
> to experiment with new formats.

Everything old is new again?
https://bugzilla.mozilla.org/show_bug.cgi?id=18574


Re: JS-Implemented Image Codecs

Mike Hommey
In reply to this post by Justin Dolske-2
On Mon, Oct 28, 2013 at 04:52:23PM -0700, Justin Dolske wrote:

> On 10/28/13 6:32 AM, Benjamin Smedberg wrote:
> >On 10/28/2013 5:46 AM, Bobby Holley wrote:
> >>I've spoken with various people, but I can't find any large prior
> >>discussions about this issue. If I'm behind the times, please link me to
> >>the relevant discussion.
> >>
> >>I think we should consider speccing and implementing a web-exposed
> >>mechanism for implementing image codecs in JS.
> >>[...]
> >
> >Why do you think this is important to do now? We have a pretty large
> >backlog of features already, and this seems relatively less important in
> >comparison.
>
> I'm wondering the same. If there are some simple changes we can make
> to enable decoders supplied by _add-ons_, that might be useful for
> helping to experiment with new formats.

Note that the code that existed and partially allowed this has been
removed.

Mike

Re: JS-Implemented Image Codecs

Nicholas Nethercote
In reply to this post by Justin Dolske-2
On Mon, Oct 28, 2013 at 4:52 PM, Justin Dolske <[hidden email]> wrote:
>
> But overall the need to add new formats to the
> web seems pretty rare (although we happen to be living in one of the moments
> where there are some options).

If it wasn't so hard to add support for a new format, there'd be a lot
more experimentation, and we may well see interesting new cases.  I
like the idea.

Nick

Re: JS-Implemented Image Codecs

David Rajchenbach-Teller-2
Generally, I like the idea, but what would this bring us that canvas +
xhr + js [ + workers ] don't already provide? If the only difference is
to use <img> style cross-origin loading instead of xhr-style loading,
I'm not even sure that it's something we want.

Cheers,
 David

On 10/29/13 5:10 AM, Nicholas Nethercote wrote:

> If it wasn't so hard to add support for a new format, there'd be a lot
> more experimentation, and we may well see interesting new cases.  I
> like the idea.
>
> Nick


--
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla

Re: JS-Implemented Image Codecs

Anne van Kesteren
On Tue, Oct 29, 2013 at 7:57 AM, David Rajchenbach-Teller
<[hidden email]> wrote:
> Generally, I like the idea, but what would this bring us that canvas +
> xhr + js [ + workers ] don't already provide? If the only difference is
> to use <img> style cross-origin loading instead of xhr-style loading,
> I'm not even sure that it's something we want.

It gives the web the power to deploy a new image format, without
having to reinvent CSS, HTML, SVG, and other places I missed that have
the ability to reference images. It also puts those images through the
same code paths as the browser puts them through, which gives them
per-document pinning of image URLs, proper CSP handling, caching,
decoding when needed, etc.


--
http://annevankesteren.nl/

Re: JS-Implemented Image Codecs

Justin Dolske-2
In reply to this post by Justin Dolske-2
On 10/28/13 9:10 PM, Nicholas Nethercote wrote:
> On Mon, Oct 28, 2013 at 4:52 PM, Justin Dolske <[hidden email]> wrote:
>>
>> But overall the need to add new formats to the
>> web seems pretty rare (although we happen to be living in one of the moments
>> where there are some options).
>
> If it wasn't so hard to add support for a new format, there'd be a lot
> more experimentation, and we may well see interesting new cases.

I'd rather suspect that the difficulty comes more from (1) it being an
established field with good existing options [JPEG is 20+ years old, yet
none of the new contenders seem vastly superior] and (2) the web browser
is just a tiny part of the whole ecosystem [your camera, image editors,
hosting sites like Flickr, and the myriad of other tools/uses].

You can experiment with new formats, cross-browser, _today_. It's not
perfect, but if there's pent-up interest I'd expect to see a fair amount
of such usage. Is there any? Ditto for closed-ecosystem usage (e.g. native
apps, console games, embedded systems) where there are no browser or
compatibility issues, and the developer has end-to-end control. What's
the story there? (I'm assuming not-great since I've not heard of it
being a thing, but I honestly don't know.)

Justin

Re: JS-Implemented Image Codecs

Bobby Holley-2
In reply to this post by Anne van Kesteren
On Tue, Oct 29, 2013 at 12:24 PM, Anne van Kesteren <[hidden email]> wrote:

> It gives the web the power to deploy a new image format, without
> having to reinvent CSS, HTML, SVG, and other places I missed that have
> the ability to reference images. It also puts those images through the
> same code paths as the browser puts them through, which gives them
> per-document pinning of image URLs, proper CSP handling, caching,
> decoding when needed, etc.
>

This. Modern browsers do a lot of things with images, and reinventing all
of that machinery is an uphill battle. This is why people are still
clamoring for native WebP support despite the existence of shims like
WebPJS.

> You can experiment with new formats, cross-browser, _today_. It's not
> perfect, but if there's pent-up interest I'd expect to see a fair amount
> of such usage.
>

I think there's a difference between experimentation and production usage.
I'd argue that the tools available today are good enough for the former,
but not for the latter.

Whether my proposal is good enough for the latter is an interesting
question. Talking to jst, it sounds like SIMD is pretty necessary to
achieve full performance parity with native decoders. We're working on
stuff in that direction, but it's not there yet. At the same time, it's not
clear to me whether we're talking 10% difference or 100% difference. It
would be interesting to study the state of the art on the decoding side
(emscriptened codecs and whatnot) to see what the performance looks like.

I don't claim that this is the highest priority thing we could work on. But
I do think it's higher priority than shipping binary support for any new
image format, and think that the web will be healthier in the long run if
we opt for extensibility here.

bholley

Re: JS-Implemented Image Codecs

Asa Dotzler-2
In reply to this post by Anne van Kesteren
On 10/30/2013 9:45 AM, Bobby Holley wrote:

> I don't claim that this is the highest priority thing we could work on. But
> I do think it's higher priority than shipping binary support for any new
> image format, and think that the web will be healthier in the long run if
> we opt for extensibility here.

I've heard just the opposite from some Mozillians, that a few
standardized formats are a lot better for the Web than uncountable
custom formats. I've heard this said of video/audio formats and image
formats.

If that's true, then this is not only not the highest priority, it
should be an anti-priority.

Am I missing a distinction here? I love the idea of a super-powerful VM
for the Web that can do "anything", but that doesn't feel to me in line
with making the Web easier to understand and participate in for most
people. A few easy and well-described standards seem much more Mozilla
mission-y than opening the Web to an unknowable number of competing
non-standard media formats.

- A


Re: JS-Implemented Image Codecs

Bobby Holley-2
On Wed, Oct 30, 2013 at 6:15 PM, Asa Dotzler <[hidden email]> wrote:

> I've heard just the opposite from some Mozillians, that a few standardized
> formats are a lot better for the Web than uncountable custom formats. I've
> heard this said of video/audio formats and image formats.
>


>  that doesn't feel to me in line with making the Web easier to understand
> and participate in for most people. A few easy and well described standards
> seem much more Mozilla mission-y than opening the Web to an unknowable
> number of competing non-standard media formats.
>

Can you describe your reasoning for these conclusions?

I don't think it's very likely that the web will be overrun with custom
codecs. There's not a huge incentive to ship them. This mechanism is mostly
designed to allow next-generation formats to take hold without requiring
consensus among all the browser vendors (which, as we've discovered, can
take more than a decade).

If we don't expect image formats to evolve much in the future, this may not
be worth the effort. At the same time, Brendan believes that custom JS video
codecs are the future [1], and the solution to some of the most pressing
problems on the web today. If we want to support those as first-class
codecs, the bulk of the machinery might be shareable.

bholley

[1] https://brendaneich.com/2013/05/today-i-saw-the-future/

Re: JS-Implemented Image Codecs

Steve Fink-4
In reply to this post by Asa Dotzler-2
On 10/30/2013 10:15 AM, Asa Dotzler wrote:

> On 10/30/2013 9:45 AM, Bobby Holley wrote:
>
>> I don't claim that this is the highest priority thing we could work
>> on. But
>> I do think it's higher priority than shipping binary support for any new
>> image format, and think that the web will be healthier in the long
>> run if
>> we opt for extensibility here.
>
> I've heard just the opposite from some Mozillians, that a few
> standardized formats are a lot better for the Web than uncountable
> custom formats. I've heard this said of video/audio formats and image
> formats.
>
> If that's true, then this is not only not the highest priority, it
> should be an anti-priority.
>
> Am I missing a distinction here? I love the idea of a super-powerful
> VM for the Web that can do "anything" but that doesn't feel to me in
> line with making the Web easier to understand and participate in for
> most people. A few easy and well described standards seem much more
> Mozilla mission-y than opening the Web to an unknowable number of
> competing non-standard media formats.
You have to be careful to distinguish custom formats that need to be
implemented in the browser from custom formats that can be implemented
in content. Web devs have an understandable dislike for the former; many
times, that boils down to them needing to transcode and serve yet
another format that consumes space and bandwidth and cache capacity. So
if you ask people if they want more formats, the "hell no!" reaction is
probably mostly due to that.

Content-implemented formats have none of those drawbacks. You only serve
up your chosen format (ok, realistically with a fallback for low-end
clients). It's still more complexity, though, and it'd be tough to make
a custom format that is an overall win in latency, bandwidth, etc.,
especially for the first hit.

Re: JS-Implemented Image Codecs

Chris Double
In reply to this post by Asa Dotzler-2
Asa Dotzler <[hidden email]> writes:

> I've heard just the opposite from some Mozillians, that a few
> standardized formats are a lot better for the Web than uncountable
> custom formats. I've heard this said of video/audio formats and image
> formats.

Format proliferation is bad if it means that some users miss out because
the format isn't implemented on their platform or browser. If formats
were implementable in JS, that argument goes away - users aren't as
disadvantaged by the additional format.

I think having hooks to implement image, video and audio decoders in JS
is a great idea and I look forward to work being done to support it.


--
http://www.bluishcoder.co.nz


Re: JS-Implemented Image Codecs

Florian Bender
In reply to this post by Bobby Holley-2
On Monday, October 28, 2013 10:46:30 AM UTC+1, Bobby Holley wrote:
> I think we should consider speccing and implementing a web-exposed
> mechanism for implementing image codecs in JS. […]
> The codec would be fetched […] and instantiated in a sandboxed worker […],
> which would decode buffered source data into a pixel buffer.

One more thing: in the interest of the user and of cross-compatibility (e.g. photo-editing apps), any image (resp. content) decoded by a custom codec should be exportable into one of the "standard" formats (for images, that is JPEG & PNG; I don't know how feasible this would be for video if/when that becomes a concern). So, e.g., context menu -> Save Image needs to offer both of these formats as a file type.
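
A page (or the UA) could already produce such an export by pushing the
decoded pixels through a canvas. A rough sketch, using only existing
canvas APIs (pixelsToPNG is an invented helper name):

    // Re-encode decoded RGBA pixels as a PNG blob via a canvas.
    function pixelsToPNG(width, height, pixels, callback) {
      var canvas = document.createElement("canvas");
      canvas.width = width;
      canvas.height = height;
      var ctx = canvas.getContext("2d");
      var img = ctx.createImageData(width, height);
      img.data.set(pixels);                  // Uint8ClampedArray, RGBA
      ctx.putImageData(img, 0, 0);
      canvas.toBlob(callback, "image/png");  // PNG encoder is built in
    }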

Would this be an issue (because of encoding) for Mozilla?