Porting protocol parsing to newer coding idioms

Joshua Cranmer 🐧
Over the past week, I've been poking at porting NNTP to use a
promise-based approach instead of the protocol state machine that it
presently uses. There are 3 reasons for this:

 1. I wanted to see just how amenable C++ was to implementing promises.
 2. I want to move all of the protocol implementations off-main-thread.
 3. I want to stop using URLs to drive the state machine and actually
    use more direct function calls. (Seriously, look up how we do
    server-side search on NNTP to understand some of the insanity that
    can get exposed).

(Using URLs is particularly dumb for POP and SMTP since, well, there's
no actual URL scheme that could be conceivably used. news: URIs do
conceivably exist and actually make sense to implement as a URI handler).

As of right now, I've got a proof-of-concept patch that successfully
rips out most of the protocol state machine handling for promises. It's
currently at about a net deletion of 2000 lines of code, as it turns out
that the handling of multiline responses is quite verbose and repeated
in several different places. What I have right now satisfies only the
first goal of the process, and doesn't directly make any headway into
the second and third goals.

POP, IMAP, SMTP, and NNTP all share some broadly similar characteristics
as protocols: they are a UTF-8 control channel multiplexed with 8-bit
binary data in the same socket. Furthermore, STARTTLS is far more common
in these protocols than in other ones such as HTTP, which means it tends
to be harder to find support in canned socket implementations. POP, SMTP
and NNTP also very much share alignment in their commands, relying
heavily on dot-stuffing and three-digit response codes, whereas IMAP
relies on binary blobs of exact count and a tagged and complex parsing
structure for commands; it's sort of the odd man out. While Thunderbird
doesn't itself support these protocols, MANAGESIEVE and IRC are also
both superficially similar in being text-centric rather than binary
protocols, although I think MANAGESIEVE is purely UTF-8 and IRC is in the
charset hell of "guess what the server does." It's for these reasons
that I've often thought about implementing all of these protocols in
terms of a common "mail socket" API, although I've vacillated for a
while on the exact structure of the API.
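
To make the shared structure concrete, here is a minimal sketch of the
dot-stuffing convention in Rust (names are illustrative, not from my patch):

// One line of a dot-stuffed multiline response: either an unstuffed
// data line or the terminating "." line.
enum MultilineItem<'a> {
    Line(&'a str),
    End,
}

// Undo dot-stuffing for a single line: "." alone terminates the
// response, and any other line starting with "." loses its first dot.
fn unstuff(line: &str) -> MultilineItem<'_> {
    match line {
        "." => MultilineItem::End,
        l if l.starts_with('.') => MultilineItem::Line(&l[1..]),
        l => MultilineItem::Line(l),
    }
}

Small as that is, it's exactly the sort of logic that was repeated verbosely
in several places in the old state machine.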

Having done the current work on NNTP, I'm at a crucial juncture--trying
to decide what language to actually implement the protocol core in.
There are arguments to be made in choosing which language to write it in:

Rust: Networking protocols and parsing are the forté of Rust, and absent
any other concerns, writing this code in Rust would probably be the best
choice. Mozilla as of late appears to be of the opinion that this is the
sort of code that should be written in Rust rather than C++ or JS. But
mailnews code does not have any Rust code at present, and I suspect that
the number of developers who understand it well enough to maintain it,
perhaps even on an emergency basis, is rather small. The integration of
system logic between Rust and C++, let alone JS, is also an issue: using
Rust's native networking API (or async network APIs such as tokio) is
not viable because of the issues with integrating with SSL sockets, and
handling the communication between the protocol implementation core and
the rest of the codebase is a path less traveled. XPIDL bindings for
Rust do exist, but it's unclear what commitment Mozilla has or does not
have to them.

JS: If networking and parsing is Rust's forté, it is the bane of JS's
existence. JS makes a strong distinction between binary data and strings,
which makes it cumbersome to implement this functionality. Furthermore,
the options for off-main-thread work in JS are grim, especially when it
comes to setting up the sockets. There are also the drawbacks of JS not
having static errors and its propensity to swallow important-for-developers
errors, which makes writing robust code here challenging. The only real
advantage JS has is... it makes it easy for
extensions to use, which is probably less true these days given the
trashing that Mozilla has done to extension APIs.

C++: What C++ has to recommend it is that it is the least work.
nsNNTPProtocol and nsMsgProtocol already form a solid foundation for
most of the socket work, and it's very likely that even if the protocol
implementations themselves were ported to a different language, the
sockets would still be constructed and managed in C++ instead. But the
big disadvantage is that C++ is by far the least ergonomic language to
use. It doesn't have async/await coroutines (unlike JS or, very shortly
if not already, Rust), and it is quite clear that there are safety
coding issues when attempting to use lambdas for promise callbacks: my
prototype implementation does have some nullptr dereferences because
I've been inconsistent about whether utility objects are reused as member
variables of nsNNTPProtocol or passed in as arguments to promise
constructors.

So at the end of the day, it looks like there are three bad choices
here: choose a language that's well-suited to the task, but is poorly
integrated into the environment; choose a language that's mediocre in
ergonomics but probably fails to meet all the objectives; or choose a
language that's poorly-suited to the task, but minimizes porting effort
and risk.

Another thing I've been thinking about is how to extend the API for the
message database to allow for use off-main-thread. Such an API could
grow to be an effective asynchronous database API that would allow the
replacement of our database with one that doesn't suck, but it would
have to rely on proxying to the current implementation. The needs of the
protocol implementations would be a good way to start framing some of
the necessary methods.
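
As a purely hypothetical sketch of the shape (every name here is made up,
and it's in Rust only because that's compact), such a facade might start as:

use std::future::Future;
use std::pin::Pin;

// Hypothetical types, for shape only.
struct DbError;
struct MsgHdr { key: u32, subject: String }

type DbFuture<T> = Pin<Box<dyn Future<Output = Result<T, DbError>> + Send>>;

// Every call returns a future, so an off-main-thread protocol
// implementation can proxy each request to the current synchronous
// database living on the main thread.
trait AsyncMsgDatabase {
    fn get_msg_hdr_for_key(&self, key: u32) -> DbFuture<MsgHdr>;
    fn add_new_hdr(&self, hdr: MsgHdr) -> DbFuture<()>;
}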

Thoughts/comments/questions/concerns?

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist


Re: Porting protocol parsing to newer coding idioms

Patrick Cloke-3
Hey Joshua,

An enlightening read, as always. :) My feedback is super biased towards
chat protocols (though I've done a bit with IMAP and SMTP outside of
Thunderbird).

On 2/23/19 1:06 AM, Joshua Cranmer 🐧 wrote:
> Over the past week, I've been poking at porting NNTP to use a
> promise-based approach instead of the protocol state machine that it
> presently uses.

Using promises vs. having a state machine has worked well for me for a
variety of protocols; I think it tends to end up with cleaner code,
especially when combined with generator-style callbacks (this would be
async functions in JavaScript, inlineCallbacks in Python+Twisted, ...).

I'm curious how a Promise-like API would look for some of these
protocols; most of our chat protocols essentially just react to incoming
data. Are NNTP/IMAP/POP/SMTP drastically different from this?

A cool way that we've handled state machines before (that I completely
forgot about until fixing some recent bustage) was to use a generator to
track through an authentication negotiation for XMPP. This lets you
encapsulate all of the logic in a single function without needing a
separate state variable.

> There are 3 reasons for this:
>
> 1. I wanted to see just how amenable C++ was to implementing promises.
> 2. I want to move all of the protocol implementations off-main-thread.
> 3. I want to stop using URLs to drive the state machine and actually
>     use more direct function calls. (Seriously, look up how we do
>     server-side search on NNTP to understand some of the insanity that
>     can get exposed).

I'd love to get the chat protocol implementations off the main thread as
well, but our socket code is completely in JavaScript, so not sure how
feasible that is.

> POP, IMAP, SMTP, and NNTP all share some broadly similar characteristics
> as protocols: they are a UTF-8 control channel multiplexed with 8-bit
> binary data in the same socket. Furthermore, STARTTLS is far more common
> in these protocols than in other ones such as HTTP, which means it tends
> to be harder to find support in canned socket implementations. POP, SMTP
> and NNTP also very much share alignment in their commands, relying
> heavily on dot-stuffing and three-digit response codes, whereas IMAP
> relies on binary blobs of exact count and a tagged and complex parsing
> structure for commands; it's sort of the odd man out. While Thunderbird
> doesn't itself support these protocols, MANAGESIEVE and IRC are also
> both superficially similar in being text-centric rather than binary
> protocols, although I think MANAGESIEVE is purely UTF-8 and IRC is in the
> charset hell of "guess what the server does." It's for these reasons
> that I've often thought about implementing all of these protocols in
> terms of a common "mail socket" API, although I've vacillated for a
> while on the exact structure of the API.

For IRC we force the user to choose; it defaults to UTF-8, and my
understanding is that most networks essentially require UTF-8 now. I
don't believe that anyone is discussing formalizing that, however.
(Would be nice to have some telemetry on this...)

> Having done the current work on NNTP, I'm at a crucial juncture--trying
> to decide what language to actually implement the protocol core in.
> There are arguments to be made in choosing which language to write:
>
> Rust: Networking protocols and parsing are the forté of Rust, and absent
> any other concerns, writing this code in Rust would probably be the best
> choice. <...snip...>

My big concerns here would be hooking any of this up to Necko and/or
NSS. Does Mozilla do any socket code in Firefox using Rust?

> JS: If networking and parsing is Rust's forté, it is the bane of JS's
> existence. <...snip...>

Preaching to the choir here. Dealing with UTF-8 data actually isn't
terrible, but dealing with binary data is pretty rough. I think you
skipped an important benefit here though, which is that JavaScript has
built-in support for Promises + lots of syntactic sugar around them
(async functions) and an ecosystem to write tests easily (using add_task
for xpcshell).

> C++: What C++ has to recommend it is that it is the least work.

I've no opinion here.

> So at the end of the day, it looks like there are three bad choices
> here: choose a language that's well-suited to the task, but is poorly
> integrated into the environment; choose a language that's mediocre in
> ergonomics but probably fails to meet all the objectives; or choose a
> language that's poorly-suited to the task, but minimizes porting effort
> and risk.

This is mostly an unrelated thought, but do you have any thoughts on
whether it would be beneficial to implement some of the core parsing /
handling / whatever as essentially an external library? This might not
be possible initially, but separating our state machines from I/O as
much as possible would allow a couple of things:

* Easier to eventually move external to the code-base and treat as a
dependency. (This may or may not be a good thing, depending who you ask.)
* Easier to test since you do not need mock-servers.

It has been on my long-term to-do list to perform this change to our IRC
code. See [1] for some thoughts on separating I/O from protocol parsing
related to Python + HTTP/2.

--Patrick

[1]
https://pyvideo.org/pycon-us-2016/cory-benfield-building-protocol-libraries-the-right-way-pycon-2016.html

Re: Porting protocol parsing to newer coding idioms

Joshua Cranmer 🐧
On 2/25/19 3:48 PM, Patrick Cloke wrote:
> I'm curious how a Promise-like API would look for some of these
> protocols; most of our chat protocols essentially just react to incoming
> data. Are NNTP/IMAP/POP/SMTP drastically different from this?

POP and SMTP both essentially have a single push command that's,
respectively, "Is there new mail" and "I have a message for you,"
combined with the authentication/STARTTLS stuff that needs to be done
first, and a little bit of extra handling if you need to enable EAI or a
few other features. The high-level API here is essentially a single
function with a bajillion parameters.
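
To illustrate the shape, here's a hypothetical SMTP flavor of that
function in Rust (every name is made up, not taken from my prototype):

struct Credentials { username: String, password: String }

// All the knobs that would otherwise be separate protocol states.
struct SmtpOptions {
    use_starttls: bool,
    auth: Option<Credentials>,
    require_eai: bool, // internationalized addresses need server support
}

// More or less the whole high-level SMTP API: one call, many parameters.
async fn send_message(
    server: &str,
    port: u16,
    options: &SmtpOptions,
    from: &str,
    recipients: &[&str],
    message: &[u8],
) -> Result<(), std::io::Error> {
    // Connect, optionally STARTTLS, authenticate, then MAIL FROM /
    // RCPT TO / DATA; all elided in this sketch.
    unimplemented!()
}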

NNTP is more interesting: there are a few different commands to try to
do (post, get news, search, list groups), and there are actually some
potential issues with a naive implementation--there can be 100,000 groups
on a server, which is enough that the list-groups command needs to pause
several times to actually let GUI threads advance. NNTP is also
interesting in that it does lazy authentication: it waits for the server
to say "hey, you need to authenticate to do this" before attempting it.
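
With promises, that lazy retry falls out fairly naturally; a sketch, with
a hypothetical Connection type and stub methods (only the retry shape
matters here):

#[derive(PartialEq)]
enum ErrorKind { AuthRequired, Other }
struct Error { kind: ErrorKind }
struct GroupInfo { count: u64 }

struct Connection;
impl Connection {
    async fn send_group(&mut self, _group: &str) -> Result<GroupInfo, Error> {
        unimplemented!() // stub: would send "GROUP <name>" and parse the reply
    }
    async fn authenticate(&mut self) -> Result<(), Error> {
        unimplemented!() // stub: would run AUTHINFO USER/PASS or SASL
    }
}

// Lazy authentication: try the command first, and only when the server
// answers "480 authentication required" do we authenticate and retry.
async fn select_group(conn: &mut Connection, group: &str)
    -> Result<GroupInfo, Error> {
    match conn.send_group(group).await {
        Err(e) if e.kind == ErrorKind::AuthRequired => {
            conn.authenticate().await?;
            conn.send_group(group).await
        }
        other => other,
    }
}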

But all three of those protocols are mostly client-pushing-to-the-server
rather than waiting for the server to do something. IMAP does both; it's
essentially a database synchronization protocol that didn't realize
that's what it was, and IMAP IDLE is used to wait for the server to
notify instead of having the client periodically poll.

> A cool way that we've handled state machines before (that I completely
> forgot about until fixing some recent bustage) was to use a generator to
> track through an authentication negotiation for XMPP. This lets you
> encapsulate all of the logic in a single function without needing a
> separate state variable.

Yeah, one of my goals was killing the various member variables needed to
keep track of all of this state. C++ doesn't have coroutines (well, it
does as of last week, but that's not implemented in a compiler we can
use yet), so I made do with lambdas and explicit functions and quite a
bit of gritting and bearing to handle loops. The XPAT code, for example,
looks like this:

RefPtr<GenericPromise>
nsNNTPProtocol::SearchFolder(nsIMsgNewsFolder *folder,
    nsIMsgSearchSession *searchSession, const nsACString &searchData) {
  // Change to the given folder first.
  RefPtr<GenericPromise> searchPromise = ChangeGroup(folder);
  for (const nsACString &term : searchData.Split('/')) {
    nsCString copy(term);
    searchPromise = searchPromise->Then(
      GetCurrentThreadSerialEventTarget(), "XPAT term",
      [copy, searchSession, this](GenericPromise::ResolveOrRejectValue value) {
        if (value.IsResolve())
          return SendSearchCommand(copy, searchSession);
        return GenericPromise::CreateAndResolveOrReject(std::move(value), "");
      });
  }

  return searchPromise;
}

RefPtr<GenericPromise>
nsNNTPProtocol::SendSearchCommand(const nsACString &searchString,
    nsIMsgSearchSession *searchSession)
{
  return SendCommand({}, "%s", PromiseFlatCString(searchString).get())
    ->Then(GetCurrentThreadSerialEventTarget(), __func__,
      [this, searchSession](NNTPResponse response) {
        if (response.first() != MK_NNTP_RESPONSE_XPAT_OK) {
          AlertError(response.second());
          return GenericPromise::CreateAndReject(NS_ERROR_FAILURE, __func__);
        }

        return ReadDotStuff([searchSession](nsCString line) {
          nsresult rv;
          int64_t articleNumber = line.ToInteger64(&rv);
          nsCOMPtr<nsIMsgSearchAdapter> searchAdapter;
          searchSession->GetRunningAdapter(getter_AddRefs(searchAdapter));
          if (NS_SUCCEEDED(rv) && searchAdapter)
            searchAdapter->AddHit((uint32_t)articleNumber);
          return rv;
        });
      },
      [](nsresult rv) {
        return GenericPromise::CreateAndReject(rv, __func__);
      });
}

(Search was the first thing I did, before I gave up on actually trying
to replace all the m_* helper functions).

>
>> There are 3 reasons for this:
>>
>> 1. I wanted to see just how amenable C++ was to implementing promises.
>> 2. I want to move all of the protocol implementations off-main-thread.
>> 3. I want to stop using URLs to drive the state machine and actually
>>     use more direct function calls. (Seriously, look up how we do
>>     server-side search on NNTP to understand some of the insanity that
>>     can get exposed).
>
> I'd love to get the chat protocol implementations off the main thread as
> well, but our socket code is completely in JavaScript, so not sure how
> feasible that is.
>
> (Would be nice to have some telemetry on this...)

We could say that about a lot of stuff...

> My big concerns here would be hooking any of this up to Necko and/or
> NSS. Does Mozilla do any socket code in Firefox using Rust?

My first thought was that they do not. But when I was trying to debug a
threading issue, I noticed we had tokio actually running in libxul, and
sleuthing found that we're using it in media/audioipc. This does appear
to be Unix domain sockets, on closer inspection.

It would definitely be worth trying to see what Mozilla's take on
hooking up Necko/NSS to Rust for async I/O would be. It might not be a
bad idea to bypass Necko if possible--a fair amount of the interfaces
are clearly designed to work in the mindset of "I need to display a
webpage," which isn't particularly valuable for mailnews protocols.

> Preaching to the choir here. Dealing with UTF-8 data actually isn't
> terrible, but dealing with binary data is pretty rough. I think you
> skipped an important benefit here though, which is that JavaScript has
> built-in support for Promises + lots of syntactic sugar around them
> (async functions) and an ecosystem to write tests easily (using add_task
> for xpcshell).

Rust has cargo test, which is easier than xpcshell or mocha for doing
test development. It doesn't have async/await on stable yet, though it
is on nightly builds and is expected "soon" (apparently 2019 timeframe).

> This is mostly an unrelated thought, but do you have any thoughts on
> whether it would be beneficial to implement some of the core parsing /
> handling / whatever as essentially an external library? This might not
> be possible initially, but separating our state machines from I/O as
> much as possible would allow a couple of things:
>
> * Easier to eventually move external to the code-base and treat as a
> dependency. (This may or may not be a good thing, depending who you ask.)
> * Easier to test since you do not need mock-servers.
>
> It has been on my long-term to do list to perform this change to our IRC
> code. See [1] for some thoughts on separating I/O from protocol parsing
> related to Python + HTTP/2.

If you asked me 5 years ago, I would have said it was very beneficial to
make easily reusable libraries. Today, I'm not so sure. POP, SMTP, and
NNTP [1] are very simple protocols themselves; from a client's
perspective, parsing them amounts to converting byte streams to line
streams, dot-stuffing or unstuffing as appropriate, and maybe splitting
on whitespace and converting some fields to integers. The difficulty in
handling these protocols is essentially all in MIME [2], or in the
higher-level interpretation of what one should be doing (this is
particularly true for IMAP, where trying to figure out how message
sequence numbers map to the headers themselves is a recipe for bugs).
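
For instance, the byte-stream-to-line-stream step is roughly this (a
sketch, not code from the prototype):

// Split complete CRLF-terminated lines off the front of a receive
// buffer, leaving any partial line in place for the next read.
fn split_lines(buffer: &mut Vec<u8>) -> Vec<Vec<u8>> {
    let mut lines = Vec::new();
    while let Some(pos) = buffer.windows(2).position(|w| w == b"\r\n") {
        let mut line: Vec<u8> = buffer.drain(..pos + 2).collect();
        line.truncate(pos); // drop the trailing CRLF
        lines.push(line);
    }
    lines
}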

One of the issues with trying to keep I/O separate from the protocol
details is that your I/O actually sort of determines what you want to
have the streams of inputs and streams of outputs be--this is the
problem I ran into with jsmime. I do have a pure-protocol SASL
implementation kicking about somewhere that I can easily adapt as needed
[3].
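
The whole surface of that API is tiny; roughly, as a Rust sketch with
hypothetical names (footnote [3] below describes the bytes-in/bytes-out
idea):

struct SaslError;

// The entire pure-protocol SASL surface: feed in the server's challenge,
// get back the client's response, and repeat until the mechanism is done.
trait SaslMechanism {
    fn step(&mut self, challenge: &[u8]) -> Result<Vec<u8>, SaslError>;
}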

My start at figuring out what a Rust implementation for NNTP would look
like does involve abstracting the I/O out into a Socket: AsyncRead +
AsyncWrite trait, and I did mocking with a simple implementation that
explicitly lists the expected packets in the expected order [4]. It's
not completely abstracting away I/O in the sense of "just feed me bytes
and I'll feed you a stream of data;" there is a definite bias towards an
asynchronous view of the world. This implementation is also not directly
being driven by nsNntpService, I think, although I am still working on
what it looks like.

[1] IMAP is very deliberately left off this list.
[2] Unfortunately the prospects for reusing the current MIME parsers
off-main-thread are bleak, and I can already hear Jorg groaning.
[3] SASL is easy: it's explicitly described as "server sends X, client
sends Y, server sends Z, etc," so the API is bytes sasl.step(bytes), or
Promise<bytes> if you're relying on WebCrypto, which only gives async
results.
[4] Rust makes it easier to implement traits than XPIDL does. The
implementation for AsyncRead here in its entirety is:
impl Read for MockSocket {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
        let command = self.next_command();
        let data = match command {
            MockData::Server(data) => data,
            MockData::Client(_) =>
                panic!("Client requested data the server hasn't sent"),
        };
        assert!(buf.len() > data.len(), "Too much data being returned");
        buf[0..data.len()].copy_from_slice(data);
        Ok(data.len())
    }
}
impl AsyncRead for MockSocket {}

Yeah, AsyncRead's implementation is empty; a default definition is filled
in based entirely on the synchronous read implementation.
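
Using the mock then amounts to scripting the expected conversation in
order, roughly like this (the constructor shape here is illustrative, not
the exact harness):

// Hypothetical constructor: script the conversation in order. Reads hand
// out Server items; writes get checked against Client items.
fn make_group_mock() -> MockSocket {
    MockSocket::new(vec![
        MockData::Client(b"GROUP misc.test\r\n"),
        MockData::Server(b"211 1234 3000234 3002322 misc.test\r\n"),
    ])
}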

>
> --Patrick
>
> [1]
> https://pyvideo.org/pycon-us-2016/cory-benfield-building-protocol-libraries-the-right-way-pycon-2016.html 
>


Re: Porting protocol parsing to newer coding idioms

Philipp Kewisch-2
In reply to this post by Patrick Cloke-3
On 2/25/19 9:48 PM, Patrick Cloke wrote:
>> JS: If networking and parsing is Rust's forté, it is the bane of JS's
>> existence. <...snip...>
>
> Preaching to the choir here. Dealing with UTF-8 data actually isn't
> terrible, but dealing with binary data is pretty rough. I think you
> skipped an important benefit here though, which is that JavaScript has
> built-in support for Promises + lots of syntactic sugar around them
> (async functions) and an ecosystem to write tests easily (using add_task
> for xpcshell).

Not that this makes it a great way to do networking, but what about use
of ArrayBuffers and friends? Doesn't this allow handling binary data in
networking with JavaScript?

Philipp

Re: Porting protocol parsing to newer coding idioms

Patrick Cloke-3
On 2/27/19 5:38 AM, Philipp Kewisch wrote:

> On 2/25/19 9:48 PM, Patrick Cloke wrote:
>>> JS: If networking and parsing is Rust's forté, it is the bane of JS's
>>> existence. <...snip...>
>>
>> Preaching to the choir here. Dealing with UTF-8 data actually isn't
>> terrible, but dealing with binary data is pretty rough. I think you
>> skipped an important benefit here though, which is that JavaScript has
>> built-in support for Promises + lots of syntactic sugar around them
>> (async functions) and an ecosystem to write tests easily (using add_task
>> for xpcshell).
>
> Not that this makes it a great way to do networking, but what about use
> of ArrayBuffers and friends? Doesn't this allow handling binary data in
> networking with JavaScript?
>
> Philipp

You can use ArrayBuffers, yes. The big problem with them is that when
accessing the ArrayBuffer using a TypedArray (e.g. Int8Array and
friends) you cannot control the endianness you're accessing the data as
(it is always the platform endianness). The suggested way to handle
this is to use a DataView [1], but this can be significantly more
awkward to use, depending on the use case.

I actually wrote a rant about this in 2012 [2], which I'm somewhat
embarrassed to share, but the gist of it is still true. The problem *is*
now documented though, so that's good! I also wrote a bunch of helpers
to work with ArrayBuffers [3] (which look like they could use a lot more
documenting... but pretty much you end up building out a bunch of C-like
functions for copying data around).

I think this fits into what Joshua was saying in his original post
though -- you can definitely do it in JavaScript, but the ergonomics
aren't great.

--Patrick

[1] See the opening paragraph of
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Int32Array 
and
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DataView#Endianness
[2] https://patrick.cloke.us/posts/2012/11/28/javascript-typed-arrays-pain/
[3]
https://hg.mozilla.org/comm-central/file/tip/chat/modules/ArrayBufferUtils.jsm

Re: Porting protocol parsing to newer coding idioms

ISHIKAWA,chiaki
In reply to this post by Joshua Cranmer 🐧
On 2019-02-23 15:06, Joshua Cranmer 🐧 wrote:

> Over the past week, I've been poking at porting NNTP to use a promise-based
> approach instead of the protocol state machine that it presently uses. There
> are 3 reasons for this:
> <...snip...>
>

The following may be orthogonal to your concerns right now, but have you
considered using a framework that creates the protocol handler from a
formal specification instead of hand-written code?

When I say framework, I am thinking of something like a framework for
generating a parser from a formal grammar of a computer language.
There are tools available for this; YACC and Bison are such tools.
(I think there is a version that generates C++ code.)

We can create a protocol handler from a formal specification of the
communication protocol, too.

Generally speaking, I have found that hand-written code for protocol
handling is
1 - often error-prone (there is no guarantee that the code implements the
protocol correctly),
2 - not quite straightforward to understand, and
3 - hard to follow for enhancement or modification (especially for error
handling).

For point 3, oftentimes the protocol is extended after wide usage, but
it may not be clear how to change the existing hand-written code (this is
coupled with 1 above).

Actually, I thought I would rewrite the POP3 handler (and eventually IMAP)
to use such a framework, since I find the current code rather hard to
follow. Most importantly, the error handling framework is not quite clear
(and actually, the current protocol handler doesn't handle errors very
well and hides them, to my horror).

There are ways to use YACC/Bison to handle simple communication protocols:
the key is to write low-level routines to recognize "tokens" in the case of
communications. "Tokens" are typically keywords, IDs, numbers, strings and
delimiters in the case of programming language parsing.
We have to be creative to recognize such "tokens" in the case of
communication protocols.

So the whole protocol handler PLUS user actions looks like this.


  Lexer (a la Token Recognizer)  ---> checks the arrival or sending of packets.
  +
  Automatically generated protocol handler
                   |
                  +---> call user-defined actions.


But the benefits are there once such a framework is used:
- We are sure that the protocol handler implements the intended protocol
correctly.
- The logic of following the protocol at a higher level and the lower-level
details are neatly separated.
- The error handling mechanism (often tied to how the generator implements
error recovery) is well documented and easy to understand (unless the
error handler written for each error case is written in a convoluted
manner).
- Modification to meet future protocol changes is often a piece of cake.

Some cite the inefficiency of generated code as a disadvantage, but in
practice it is negligible. The only place such inefficiency matters is in
an oft-used compiler. (GNU CC's C++ front end used to be written using
such a parser generator framework, but was rewritten with a hand-written
front end because of the cited inefficiency. I am not sure that was a wise
decision, but given enough man-power, supporting future grammar/semantics
changes of C++ was deemed tenable for the hand-written parser.)
An automatically generated protocol handler has a small driver that takes
care of the states and state transitions, and user-written code is invoked
from it. The state handling is only a small part of the whole processing,
so the small driver accounts for only a tiny fraction of a communication
protocol handler's total processing time.

My take is that such an approach will be a winner for long-term "maintenance
cost" alone for TB.

Regarding the use of parser generators, I have written a few compilers and
other language tools using Bison/Yacc and similar tools over the years.
I wrote such an automatically generated protocol handler for a friend
during graduate school many years ago, which amused my friend and his
supervisor. Of course, these days there ARE such generators for
communication protocol handlers, I think.

However, given that most of the high-level communication protocols that TB
handles are specified in extended BNF in IETF RFCs, I think we can still
use Bison/Yacc or their enhanced friends for POP3, IMAP, etc. effectively.

In the context of TB, it is just that I am not quite sure

- if there are versions of these generators for Rust or JavaScript.
  (It will NOT be that difficult to modify Bison-like tools to generate
  code for Rust/JavaScript ONCE the running environment is clearly
  specified; making an educated decision on this may take a while,
  though.)

- if a generator exists that works well with callbacks, as may be
  necessary. POP3 in TB seems to be implementable without callbacks, if I
  am not mistaken, but I may be wrong. (In that case, a separate instance
  of an automatically generated parser would be created for POP3 handling
  against each server.)

At least with the framework I have outlined, I think we have to be honest,
or are forced to be honest, about how to handle a very long line in IMAP,
which has caused some bustage before (and I am not even sure whether the
current code is OK or not). There is a bugzilla entry, but the long-line
support was disabled by default the last time I checked. My approach would
rewrite this part anyway, since the "token" recognizer needs to be
rewritten.

BTW, the promise-and-callback approach suggests that your design gathers
the lookahead symbol (in language-parsing terms) in advance, sees which
"token" has been collected from the input source, and invokes the proper
callback based on that input token (and on the internal state; but in your
approach the callback would have been swapped at each firing to reflect
the underlying state change, I suppose).

Just my two cents worth.

Chiaki

PS: Last week, I met an old friend from high school days who says his old
Sony VAIO computer feels like a new computer after an OS re-install; he
ditched Outlook and instead uses TB, and also ditched Explorer for FF.
Well, one more reason to support TB earnestly.
Of course, I am a big fan of TB myself.

Unfortunately, I have not been able to submit a TB build to the Mozilla
compilation farm successfully since the big file directory layout change
last April or so. I probably need to spend a few days of serious debugging
to figure the problem out.

Re: Porting protocol parsing to newer coding idioms

ISHIKAWA,chiaki
In reply to this post by Patrick Cloke-3
On 2019-02-26 05:48, Patrick Cloke wrote:

> This is mostly an unrelated thought, but do you have any thoughts on whether
> it would be beneficial to implement some of the core parsing / handling /
> whatever as essentially an external library? This might not be possible
> initially, but separating our state machines from I/O as much as possible
> would allow a couple of things:
>
> * Easier to eventually move external to the code-base and treat as a
> dependency. (This may or may not be a good thing, depending who you ask.)
> * Easier to test since you do not need mock-servers.
>
> It has been on my long-term to do list to perform this change to our IRC
> code. See [1] for some thoughts on separating I/O from protocol parsing
> related to Python + HTTP/2.
>
> --Patrick
>
> [1]
> https://pyvideo.org/pycon-us-2016/cory-benfield-building-protocol-libraries-the-right-way-pycon-2016.html

This might have a lot in common with my idea of using a parser/protocol
handler generator, posted in a separate message.

Except that the generator more or less requires filling in user actions
in an input file to the generator, and so it does not quite fit the usage
pattern of a simple library.

I watched the video of [1] on YouTube.
It mentioned that I/O ought to be separated out to fit the needs of each
application.
I agree, and basically the lexer in my diagram separates the I/O on the
input side. We can certainly define a proper output function on the
output side that is used for user-defined actions, or even as part of the
protocol handler actions.
So the I/O is separated from the protocol handling.


>   Lexer (a la Token Recognizer)  ---> checks the arrival or sending of packets.
>   +
>   Automatically generated protocol handler
>                    |
>                   +---> call user-defined actions.

Well, maybe I should think about introducing the idea more earnestly once
the current heap of patches is sent, after build submission works again.
(But of course, if Joshua's idea finds more traction, I need to figure out
whether my approach would be useful in one of the protocols such as POP3,
so as not to step on his newer patches.)

My idea in introducing such an approach was to test the protocol handler,
especially its error handling: at least POP3 error handling was not very
good when an input/output error occurred. (This is, in a sense, an error
during "token" prefetch in my language-parser analogy. The current POP3
code does not handle such errors gracefully and does not report them to
the user.)

Chiaki



Re: Porting protocol parsing to newer coding idioms

tanstaafl-2
On 2/28/2019, 5:29:18 AM, ishikawa <[hidden email]> wrote:
> Well, maybe I should think about introducing the idea more earnestly once
> the current heap of patches is sent, after build submission works again.
> (But of course, if Joshua's idea finds more traction, I need to figure out
> whether my approach would be useful in one of the protocols such as POP3,
> so as not to step on his newer patches.)

Or, this sounds like an excellent opportunity to implement an entirely
new protocol called JMAP, which is actually hoped to eventually replace
IMAP entirely anyway.

Yes, it is my bug, so yes, I'm biased.

Dovecot is working on a server side implementation, and I believe
Fastmail and Cyrus both have working code.

https://bugzilla.mozilla.org/show_bug.cgi?id=1322991

Re: Porting protocol parsing to newer coding idioms

Joshua Cranmer 🐧
In reply to this post by ISHIKAWA,chiaki
On 2/28/19 3:07 AM, ishikawa wrote:
> The following may be orthogonal to your concerns right now, but have
> you considered using a framework that creates the protocol handler
> from a formal specification instead of hand-written code?

As I said to Patrick, NNTP, POP, and SMTP are so simple that there's not
really much parsing to be had. The most complex parsing code I have for
NNTP right now is parsing response codes:
    fn parse<B: AsRef<[u8]>>(data: B) -> Result<Self, Error> {
        let string = std::str::from_utf8(data.as_ref())
            .map_err(|_| Error::new(ErrorKind::Parse,
                                    "Response is not UTF-8"))?
            .trim_end();

        let mut pieces = string.splitn(2, |ch| ch == ' ');
        let code_as_str = pieces.next().unwrap();
        let code = u16::from_str_radix(code_as_str, 10)
            .map_err(|_| Error::new(ErrorKind::Parse,
                                    "Response code is not an integer"))?;
        if code_as_str.len() != 3 || code_as_str.starts_with("+") {
            return Err(Error::new(ErrorKind::Parse,
                                  "Response code is not three digits"));
        }

        let msg: String = pieces.next().unwrap_or("").into();
        match code {
            // 400 => service not available or no longer available (the
            //     server immediately closes the connection).
            401 => Err(Error { kind: ErrorKind::NeedsExtension, msg }),
            // 403 => internal fault or problem preventing action from
            //     being taken.
            480 => Err(Error { kind: ErrorKind::AuthRequired, msg }),
            483 => Err(Error { kind: ErrorKind::NeedsTLS, msg }),
            500 => Err(Error { kind: ErrorKind::CommandNotSupported, msg }),
            501 => Err(Error { kind: ErrorKind::CommandNotSupported, msg }),
            // 502 => what?
            503 => Err(Error { kind: ErrorKind::FeatureNotSupported, msg }),
            504 => Err(Error { kind: ErrorKind::Parse, msg }),
            // 4xx or 5xx: some general kind of error.
            400..=599 => Err(Error { kind: ErrorKind::Generic(code), msg }),
            // Anything else: the code is not an error.
            _ => Ok(Self(code, msg)),
        }
    }

Even most of that is just mapping specific error codes to specific error
kinds and would disappear in a C++ or JS implementation that was less
friendly to this sort of thing.

Note, once again, that I specifically omit IMAP from this list: IMAP
does have a notoriously more complex protocol that would benefit from
having actual lexing and parsing applied to it rather than an ad-hoc
approach.

> Generally speaking, I have found the hand-written code for protocol
> handling is 1 - often error-prone (there is no guarantee that the
> code implements the protocol correctly), 2 - not quite
> straightforward to understand, and 3 - hard to follow for
> enhancement or modification  (especially for error handling)

Yeah, error handling *really* suffers in the state-machine model. I'm
shocked at just how much there is wrong in the NNTP state machine.

> There are ways to use YACC/Bison to handle simple communication
> protocols: the key is to write low-level routines to recognize
> "tokens" in the case of communications. "Tokens" are typically
> keywords, IDs, numbers, strings and delimiters in the case of
> programming language parsing. We have to be creative to recognize
> such "tokens" in case of communication protocols.

The tokio framework tends to prefer to describe connections in terms of
codecs, so that input and output streams of bytes instead become streams
of objects, in the vein of what you describe. But--again, with the notable
exception of IMAP--for the protocols in question, that tends to force
you back into a state machine situation. To properly decode whether or
not the response should be, say, a GROUP response or a HEAD response,
you have to know which command you just sent. It's a *little* better
because the state machine is contained only within the parser, and
doesn't involve the entire driving logic in the state machine, but
you're still resorting to state machines.
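
Concretely, the codec ends up needing something like this (names made up):

// What the last command sent tells the decoder to expect next.
enum Expecting {
    GroupStatus,   // reply to GROUP: "211 count low high name"
    HeadResponse,  // reply to HEAD: 221 plus dot-stuffed header lines
    ListGroups,    // reply to LIST: 215 plus dot-stuffed group lines
}

// Even split out as a codec, the decoder must remember which command was
// just sent to know how to interpret the next reply: a state machine in
// all but name, just confined to the parser.
struct NntpCodec {
    expecting: Expecting,
}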

Instead, the design I've been exploring treats the socket as merely a
dispatcher of commands. Each command is described by an execute function
that describes how it sends and receives data on the socket, or possibly
even does things such as enabling TLS on the socket. Essentially, a
command ends up looking like this:

async fn send_group(self, socket: &mut Socket) -> Result<ArticleRange> {
    socket.send_line(&format!("GROUP {}", self.name)).await?;
    let response = parse_response(socket.read_line().await?)?;
    if response.code() != 211 {
        return Err(ErrorKind::NoGroup.into());
    }
    // GROUP is the only command whose message result has to actually be
    // parsed; eliding those details here...
    parse_group_response(response.message())
}

With a bit more infrastructure, it's even possible to handle pipelining.
On the other hand, the approach doesn't naively scale to IMAP (which is
probably the only protocol to really benefit from pipelining in a major
way, although NNTP could benefit from it if we fall back to HEAD logic).

This still achieves the separation between protocol parsing and business
logic (although not quite the separation between protocol and I/O
that can be done, but I'd argue it gets close enough if you design that
Socket class right), and it keeps the improved maintainability of
coroutine-based designs. (And using Rust or JS essentially makes error
propagation the default course of action).

> - The error handling mechanism (often tied with how the generator
> implements such error recovery) is well documented and easy to
> understand (unless the error handler written for each error case is
> written in a convoluted manner),

You're probably the first person I've seen to argue that automatic
parser generators make error handling *easier*, as they're notorious for
being complete trash at error recovery. Admittedly, it's less of an
issue if you don't need to try to tell the user what the error actually was.

> My take is that such an approach will be a winner for long-term
> "maintenance cost" alone for TB.

I disagree. Parser generators tend to automate syntax. But my experience
is that most bugs--particularly in the protocols we're talking
about--are a result of semantics. Proper hand-written lexers and parsers
are as effective as parser generators at communicating the grammar they
lex/parse to the programmer and white-box fuzzers [1], and they also
tend to be easier to debug if the grammar is bad. The problem I
faced at work today was exactly such an issue: I specified a bad pattern
match that produced a syntactically valid but semantically invalid parse
tree in the transformation.

> - if there are versions of these generators for Rust or JavaScript.
>   (It will NOT be that difficult to modify Bison-like tools to generate
>   code for Rust/JavaScript ONCE the running environment is clearly
>   specified; making an educated decision on this may take a while,
>   though.)

Integrating Bison into our current build system is going to be annoying;
it's also a slightly more difficult dependency on Windows. I know there
are other generators for Rust; nom is a parser combinator library, and
there's lalrpop, which is a more traditional LALR(1) generator, similar
to Bison. Mozilla's Oxidation page links to a whitepaper suggesting to
use nom for parsing, but all of the parsers I could find in
mozilla-central seemed to use hand-rolled parsers.
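
For a taste of the combinator style, the three-digit response-code parse
from earlier might look something like this with nom (illustrative only,
not code from mozilla-central or my prototype; it assumes nom's
function-style combinators):

use nom::{bytes::complete::take_while_m_n, combinator::map_res, IResult};

// Parse exactly three ASCII digits into a u16 response code.
fn response_code(input: &str) -> IResult<&str, u16> {
    map_res(
        take_while_m_n(3, 3, |c: char| c.is_ascii_digit()),
        |s: &str| s.parse::<u16>(),
    )(input)
}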

[1] Probably more, actually. Most whitebox fuzzing techniques tend to
rely on path conditions and control flow to work out how to try to
modify the input to exercise more code paths. A LALR-style parser
generally ends up with a function that amounts to a giant indirect
switch statement and a pending tokens buffer, which are going to be
harder for the SMT solver to resolve into code to push it down to
specific action paths.

Re: Porting protocol parsing to newer coding idioms

ISHIKAWA,chiaki
Just a few points:

On 2019/03/01 12:45, Joshua Cranmer wrote:

>> - The error handling mechanism (often tied with how the generator
>> implements such error recovery) is well documented and easy to
>> understand (unless the error handler written for each error case is
>> written in a convoluted manner),
>
> You're probably the first person I've seen to argue that automatic
> parser generators make error handling *easier*, as they're notorious
> for being complete trash at error recovery. Admittedly, it's less of
> an issue if you don't need to try to tell the user what the error
> actually was.
>
My point is that with a particular parser generator, error handling has
to follow a certain preferred pattern that the generator implements, and
that is easier for a maintainer to understand than whatever ad-hoc
mechanism an original implementer used years ago in a hand-written parser
and to which others have since added undocumented changes.

It is indeed a delicate task to create a grammar that takes care of
error handling in a neat manner for a particular language (this is NOT
impossible: I did it for a few interpreters using the Yacc/Lex toolset
and a hand-crafted LALR(1) parser generator for a Pascal-like language).
But it does need a few trials and errors initially, and some exposure to
real-world erroneous input, before a good balance of error recovery and
good error reporting can be found for a particular language.
Actually, testing such a parser often deepens the understanding of the
target language as well; I have seen my parser blow up [segfault] on
cleverly crafted incorrect input.

I think when you say "being complete trash at error recovery", the
problem would be either that the particular grammar for a language was
not written to utilize the error recovery mechanism of the generated
parser runtime effectively, or that the particular parser implementation
does not use some of the advanced tricks for error recovery that are
usually available in such generators and their runtime support.
(An interactive interpreter needs a somewhat different approach than an
ordinary compiler, for example.)

But again, it is hard to diagnose bad error handling in the framework of
a particular parser generator and its generated parser without a concrete
example (the grammar and associated semantic actions).

One thing I like about the generator framework is that it forces the
developer to adopt this "grammar and associated semantic actions"
structure, so the meaningful executable statements are closely grouped
and matched with a particular grammar fragment. So again, to me, it is
generally easier to understand than hand-written code. Please note that
I have found some hand-written parsers easy to read and understand, but
that requires quite a bit of discipline from the developers, and later
maintainers can really mess up the code.
With the generator framework, the code requires a certain discipline to
follow the framework to begin with, and later maintainers can hardly
mess up that framework at all.

But again, it is true that one can write an implementation of an
automatically generated parser with bad error recovery and very
error-prone action code.

>> My take is that such an approach will be a winner for long-term
>> "maintenance cost" alone for TB.
>
> I disagree. Parser generators tend to automate syntax. But my
> experience is that most bugs--particularly in the protocols we're
> talking about--are a result of semantics. Proper hand-written lexers
> and parsers are as effective as parser generators at communicating the
> grammar they lex/parse to the programmer and white-box fuzzers [1],
> and they also tend to be more effective to debug if the grammar is
> bad. The problem I faced at work today was exactly such an issue: I
> specified a bad pattern match that produced a syntactically valid but
> semantically invalid parse tree in the transformation.

I agree that such semantic issues are indeed the bugs we face.
But that is true for BOTH hand-written parsers and automatically
generated parsers.

The way people write grammars may vary among implementers for the same
language, and it is indeed the semantic part that is error-prone once
the syntactic handling is taken care of.
Again, my point is that at least the syntactic handling is taken care of
once a proper grammar is written down (say, from the description in an
RFC; we may need to tweak it into a form that a particular parser
generator can handle), so that we can focus on the semantic processing
that is left, with the solid knowledge that the syntactic part is taken
care of.
[There are many places in TB/FF code where the syntactic correctness of
an input stream is not checked very well. The code to read JSON files
from a file stream did not even seem to bother returning a low-level I/O
error code when I checked it a few years ago. I am not entirely sure
what it returns when a file is cut in half at a random place. No wonder
TB behaves strangely when the parameter settings are read from a broken
JSON config file (corrupt due to a disk hardware error, or an I/O error
caused by a network issue in the case of a network-mounted file system
such as CIFS/Samba, NFS, etc.). I think when we see a corrupt JSON file
where the options are stored, we should quit TB before damage is done to
existing folders. All bets are off in such a case.]

To me, having the syntactic checking taken care of automatically is by
itself a benefit when the protocol or the language may change over the
long run.

That said, I suppose POP3, NNTP, etc. won't change much.
But rewriting the protocol handler using a framework of automatically
generated parsers exposes (or rather forces us to expose) the currently
not-so-well-documented error handling (which may not even exist in some
phases of processing, as I found out), and exposes the fundamental
algorithm of the protocol handling, which is not so obvious in the
legacy code of TB today.
Yes, IMAP, which is not your current target, may benefit most in that
sense.

I will look at the parser generators mentioned for Rust. Those may look
promising.
However, I wonder what the language of choice for TB would be, since
most of the low-level parser stuff is in C++ if I am not mistaken.
If there is a consensus to write the high-level parsing stuff in a
higher-level language such as Rust (with C++ code perhaps acting as a
helper), that is certainly a possible approach, although the rewriting
effort would be very large indeed and would affect many existing patches
(maybe).

POP3 as of now seems to depend heavily on C++ code for its simple
parsing, but that parsing is only superficially simple and forgets about
(proper) error processing when the low-level I/O reports file system
errors.
To be honest, since the original code seems to have been written assuming
that no low-level I/O error ever occurs, I have no idea what the "proper"
error processing ought to be in many places in the current legacy code.
Rewriting the parser forces us to think about error recovery in POP3 at
least. From what I have seen/read in past bug reports, IMAP is no better.

Chiaki



Re: Porting protocol parsing to newer coding idioms

Joshua Cranmer 🐧
On 3/1/19 5:26 AM, ISHIKAWA,chiaki wrote:
> Again my point is that at least the syntactic handling is taken care of
> after proper grammar is written down (say from the description in RFC:
> we may need to tweak it in a form that a particular parser generator can
> handle.) so that we can focus on the semantic processing that is left
> behind with the solid knowledge that syntactic part is taken care of.

Realistically, syntactic errors are not an issue for a mail client.
You're not communicating with untrusted servers, or even with servers in
a potentially attacker-malleable stream (as the stream is protected by
TLS). The realistic errors are underlying I/O errors and maybe charset
confusion issues, both of which are orthogonal to how the results are
parsed.

> To me that syntactic checking is taken care of automatically alone
> is a benefit when the protocol or the language may change over the long
> run.

But the addition of parser generator steps in the toolchain has a pretty
steep cost associated with it. On a cost-benefit analysis, I just don't
see the benefits outweighing the costs.

> However, I wonder what would be the language of choice for TB since most
> of the low-level parser stuff is in C++ if I am not mistaken.
> If there is a consensus to write high-level parsing stuff in a higher
> level language such as rust (and maybe C++ code acts like a helper),
> that is certainly a possible approach although the rewriting effort will
> be very large indeed and affect many existing patches (maybe).

Indeed, that's one of my goals in starting this thread: to see what the
thought was on moving C++ code to Rust instead of JS. I still haven't
made up my mind as to what language things will eventually be written in.

> Rewriting the parser forces us to think about error recovery in POP3 at
> least. From what I have seen/read in the past bug reports, IMAP is no
> better.

From my experience, the big benefit is ripping out the state machine
and replacing it with promises, even without async/await. (That's the
thing I've already done with NNTP, although it's too unstable for
production yet--the interaction of the promise infrastructure with
networking is problematic). After that, the next big benefit I think is
ripping URLs out of the process, followed by enforcing a better
separation between protocol parsing and mailnews handling of the results.