Discussion:
[DISCUSS] Hadoop RPC encryption performance improvements
Wei-Chiu Chuang
2018-10-25 18:04:39 UTC
Folks,

I would like to invite everyone to discuss the various Hadoop RPC encryption
performance improvements. As you probably know, Hadoop RPC encryption
currently relies on Java SASL and has _really_ bad performance: in terms of
RPCs per second, around 15~20% of the throughput without SASL.
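For those unfamiliar with where the cost comes from: after the SASL handshake, every RPC payload passes through Java-level wrap()/unwrap() calls. Below is a minimal in-process sketch of that mechanism using the JDK's DIGEST-MD5 (which Hadoop uses for token auth); the class name and credentials are made up for illustration, and it uses the "auth-int" (integrity) QOP so it runs on modern JDKs — Hadoop's privacy mode ("auth-conf") adds encryption on top of the same per-message wrap path.

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.AuthorizeCallback;
import javax.security.sasl.RealmCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslServer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch (not Hadoop code) of the JDK SASL layer Hadoop RPC encryption is
 * built on: a DIGEST-MD5 handshake, then wrap()/unwrap() on every payload.
 * This per-message Java wrapping is where the RPC throughput is lost.
 */
public class SaslWrapSketch {

    static String roundTrip(String message) throws Exception {
        Map<String, String> props = new HashMap<>();
        props.put(Sasl.QOP, "auth-int"); // integrity protection on each message

        CallbackHandler clientCb = cbs -> {
            for (Callback cb : cbs) {
                if (cb instanceof NameCallback) ((NameCallback) cb).setName("user");
                else if (cb instanceof PasswordCallback)
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                else if (cb instanceof RealmCallback) {
                    RealmCallback rc = (RealmCallback) cb;
                    String def = rc.getDefaultText();
                    rc.setText(def != null ? def : "localhost");
                }
            }
        };
        CallbackHandler serverCb = cbs -> {
            for (Callback cb : cbs) {
                if (cb instanceof PasswordCallback)
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                else if (cb instanceof AuthorizeCallback)
                    ((AuthorizeCallback) cb).setAuthorized(true);
            }
        };

        SaslServer server = Sasl.createSaslServer(
                "DIGEST-MD5", "rpc", "localhost", props, serverCb);
        SaslClient client = Sasl.createSaslClient(
                new String[]{"DIGEST-MD5"}, null, "rpc", "localhost", props, clientCb);

        // Drive the challenge/response handshake entirely in memory.
        byte[] token = server.evaluateResponse(new byte[0]);
        while (!client.isComplete() || !server.isComplete()) {
            token = client.evaluateChallenge(token);
            if (token != null && !server.isComplete())
                token = server.evaluateResponse(token);
        }

        // The expensive part: every single RPC payload gets wrapped/unwrapped.
        byte[] plain = message.getBytes(StandardCharsets.UTF_8);
        byte[] onWire = client.wrap(plain, 0, plain.length);
        byte[] received = server.unwrap(onWire, 0, onWire.length);
        return new String(received, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello, secure rpc"));
    }
}
```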

There have been some attempts to address this, most notably, HADOOP-10768
<https://issues.apache.org/jira/browse/HADOOP-10768> (Optimize Hadoop RPC
encryption performance) and HADOOP-13836
<https://issues.apache.org/jira/browse/HADOOP-13836> (Securing Hadoop RPC
using SSL). But it looks like neither attempt has been progressing.

During the recent Hadoop contributor meetup, Daryn Sharp mentioned he's
working on another approach that leverages Netty for SSL encryption,
integrating Netty with Hadoop RPC so that Hadoop RPC automatically benefits
from Netty's SSL performance.
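For context on why Netty helps here: Netty's SslHandler is driven by the JDK's non-blocking SSLEngine, so TLS can sit behind an NIO event loop rather than blocking streams. A minimal sketch of that primitive (plain JDK, no Netty, not Hadoop code; class and method names are made up):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import java.nio.ByteBuffer;

/**
 * Sketch of the non-blocking TLS primitive underneath Netty's SslHandler:
 * the JDK SSLEngine. wrap()/unwrap() never block on the network; the caller
 * (e.g. Netty's event loop) decides when to move bytes, which is what makes
 * this approach compatible with Hadoop RPC's NIO server.
 */
public class SslEngineSketch {

    static String firstHandshakeStep() throws Exception {
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        engine.setUseClientMode(true);
        engine.beginHandshake();                 // nothing sent on the wire yet

        // A client engine immediately needs to produce a ClientHello.
        String before = engine.getHandshakeStatus().toString();

        ByteBuffer app = ByteBuffer.allocate(0);
        ByteBuffer net = ByteBuffer.allocate(
                engine.getSession().getPacketBufferSize());
        // wrap() returns immediately with the handshake bytes; no thread blocks.
        SSLEngineResult result = engine.wrap(app, net);
        return before + " -> " + result.getStatus();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(firstHandshakeStep());
    }
}
```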

So as I see it, there are at least three attempts to address this issue. Do
we have consensus on:
1. whether this is an important problem
2. which approach we want to move forward with
--
A very happy Hadoop contributor
Wei-Chiu Chuang
2018-10-31 13:43:07 UTC
Ping. Anyone? Cloudera is interested in moving forward with the RPC
encryption improvements, but I'd just like to get consensus on which approach
to go with.

Otherwise I'll pick HADOOP-10768, since it's ready for commit and I've spent
time testing it.
--
A very happy Hadoop contributor
Daryn Sharp
2018-10-31 14:39:37 UTC
Various KMS tasks have been delaying my RPC encryption work, which is second
on my TODO list. It's becoming a top priority for us, so I'll try my best to
get a preliminary Netty server patch (sans TLS) up this week if that helps.

The two cited jiras had some critical flaws. Skimming my comments: both use
blocking IO (an obvious nonstarter), and HADOOP-10768 is a hand-rolled
TLS-like encryption scheme, which I don't feel is something the community can
or should maintain from a security standpoint.

Daryn
Konstantin Shvachko
2018-11-02 02:14:00 UTC
Hi Wei-Chiu,

Thanks for starting the thread and summarizing the problem. Sorry for the
slow response.
We've been looking at encrypted RPC performance as well and are interested in
this effort.
We ran some benchmarks locally, and they also showed a substantial penalty
for turning on wire encryption on RPC, although it was less drastic, more in
the range of -40%. But we ran a different benchmark, NNThroughputBenchmark,
and we ran it on 2.6 last year. We could have published the results, but we
need to rerun on more recent versions.

Three points from me on this discussion:

1. We should settle on the benchmarking tools.
For development, RPCCallBenchmark is good, as it directly measures the
improvement at the RPC layer. But for external consumption it is more
important to know about, e.g., NameNode RPC performance. So we should
probably run both benchmarks.
2. SASL vs. SSL.
Since the current implementation is based on SASL, I think it would make
sense to make improvements in that direction. I assume switching to SSL would
require configuration changes; not sure if it would be compatible, since we
don't have the details. At this point I would go with HADOOP-10768, provided
all of Daryn's concerns are addressed.
3. Performance expectations.
Ideally we want a < 10% penalty for encrypted communication. Anything over
30% will probably have very limited usability. And there is a gray area in
between, which could be mitigated by allowing mixed encrypted and unencrypted
RPCs on a single NameNode, as in HDFS-13566.

Thanks,
--Konstantin
Todd Lipcon
2018-11-02 20:20:43 UTC
One possibility (which we use in Kudu) is to use SSL for encryption but
with a self-signed certificate, maintaining the existing SASL/GSSAPI
handshake for authentication. The one important bit here, security-wise, is
to implement channel binding (RFC 5056 and RFC 5929) to protect against
MITMs. The description of the Kudu protocol is here:
https://github.com/apache/kudu/blob/master/docs/design-docs/rpc.md#wire-protocol

If implemented correctly, this provides TLS encryption (with all of its
performance and security benefits) without requiring the user to deploy a
custom cert.
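To make the channel-binding idea concrete: RFC 5929's "tls-server-end-point" binding hashes the server's DER-encoded certificate using the hash of its signature algorithm (upgrading MD5/SHA-1 to SHA-256), and the SASL/GSSAPI handshake then mixes that value in, so a MITM terminating TLS with a different certificate cannot complete authentication. A rough sketch of computing the binding token (hypothetical helper, not from Kudu or any Hadoop patch):

```java
import java.security.MessageDigest;

/**
 * Sketch of the RFC 5929 "tls-server-end-point" channel binding: hash the
 * server's DER-encoded certificate with the hash function of its signature
 * algorithm, upgrading MD5/SHA-1 to SHA-256 as the RFC requires. In real
 * code the bytes would come from X509Certificate.getEncoded().
 */
public class ChannelBindingSketch {

    public static byte[] tlsServerEndPoint(byte[] derEncodedCert, String sigAlgName)
            throws Exception {
        String alg = sigAlgName.toUpperCase();
        String hash = "SHA-256";               // default, and the MD5/SHA-1 upgrade
        if (alg.contains("SHA384")) hash = "SHA-384";
        else if (alg.contains("SHA512")) hash = "SHA-512";
        return MessageDigest.getInstance(hash).digest(derEncodedCert);
    }

    public static void main(String[] args) throws Exception {
        // Placeholder bytes stand in for a real certificate's DER encoding.
        byte[] binding = tlsServerEndPoint(new byte[]{0x30, 0x01, 0x00}, "SHA256withRSA");
        System.out.println(binding.length);    // length of the binding token
    }
}
```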

-Todd
--
Todd Lipcon
Software Engineer, Cloudera
Wei-Chiu Chuang
2018-11-02 21:35:15 UTC
Thanks all for the inputs,

To offer additional information (while Daryn is working on his stuff),
optimizing RPC encryption opens up another possibility: migrating KMS
service to use Hadoop RPC.

Today's KMS uses HTTPS + a REST API, much like WebHDFS. It has very
undesirable performance (a few thousand ops per second) compared to the
NameNode. Unfortunately, each NameNode namespace operation also needs to
access the KMS.

Migrating KMS to Hadoop RPC would greatly improve its performance (if
implemented correctly), and RPC encryption would be a prerequisite. So please
keep that in mind when discussing the Hadoop RPC encryption improvements.
Cloudera is very interested in helping with the Hadoop RPC encryption project
because a lot of our customers use at-rest encryption, and some of them are
starting to hit the KMS performance limit.

This whole "migrating KMS to Hadoop RPC" idea was Daryn's. I heard it at the
meetup, and I am thrilled to see it happening because it is a real issue
bothering some of our customers, and I suspect it is the right solution to
address this tech debt.
Wei-Chiu Chuang
2018-12-06 06:59:27 UTC
Thanks, Daryn, for your work. I saw you filed an upstream jira, HADOOP-15977
<https://issues.apache.org/jira/browse/HADOOP-15977>, and uploaded some
patches for review.
I'm watching the jira and will review as soon as I can.

Best
Erik Krogen
2018-11-01 21:29:03 UTC
Hey Wei-Chiu,


We (LinkedIn) are definitely interested in the progression of this feature. Surveying HADOOP-10768 vs. HADOOP-13836, we feel that HADOOP-10768 is a change more in line with Hadoop's progression. For example, it reuses the existing SASL layer, maintains consistency with the encryption used for data transfer, and avoids the need to set up client key/trust stores. Given that it is such a security-critical piece of code, I think we should make sure to get some additional sets of eyes on the patch and ensure that all of Daryn's concerns are addressed fully, but the approach seems valid.


Though we are interested in the Netty SSL approach, it is very difficult to make any judgements on it at this time with such little information available. How fundamental of a code change will this be? Is it fully backwards compatible? Will switching to a new RPC engine introduce the possibility for a whole new range of performance issues and/or bugs? We can appreciate the point that outsourcing such security-critical concerns to another widely used and battle-tested framework could be a big potential benefit, but are worried about the associated risks. More detailed information may help to assuage these concerns.


One additional point we would like to make is that right now, it seems that different approaches are using different benchmarks. For example, HADOOP-13836 posted results from Terasort, and HADOOP-10768 posted results from RPCCallBenchmark. Clearly the performance of the approach is crucial in making the decision and we should ensure that any comparisons made are apples-to-apples with the same test setup.

Thanks,
Erik Krogen
LinkedIn
