tag:blogger.com,1999:blog-28193643513969898812024-03-07T21:10:07.175-08:00Ted Yu's blogTed Yuhttp://www.blogger.com/profile/08113013531845314169noreply@blogger.comBlogger5125tag:blogger.com,1999:blog-2819364351396989881.post-31230599528682823502013-03-26T07:50:00.001-07:002013-03-26T10:37:13.370-07:00Compactions Q&AOn the user mailing list, questions about compaction are probably the most frequently asked.<br />
<br />
I try to summarize some answers below. They're by no means complete.<br />
<br />
<b><span style="background-color: whitesmoke; color: #333333; font-family: Verdana, sans-serif;">How to check if a major_compact is done?</span></b><br />
<span style="background-color: whitesmoke; color: #333333; font-family: Verdana, sans-serif; font-size: 12px;">(<a href="http://search-hadoop.com/m/heoc617XV29/otis+compactions&subj=Re+How+to+check+if+a+major_compact+is+done+">http://search-hadoop.com/m/heoc617XV29/otis+compactions&subj=Re+How+to+check+if+a+major_compact+is+done+</a>)</span><br />
<span style="background-color: whitesmoke; font-size: 12px;"><span style="color: #333333; font-family: Verdana, sans-serif;"><br /></span></span>
<span style="color: #333333; font-family: Verdana, sans-serif;">JMX exposes a metric about compaction time.</span><br />
<span style="color: #333333; font-family: Verdana, sans-serif;">In </span><span style="background-color: white; color: #222222; font-family: arial, sans-serif;"><a href="https://issues.apache.org/jira/browse/HBASE-6033">HBASE-6033</a>, </span><span style="color: #222222; font-family: arial, sans-serif;">which adds a function to check whether a table/region is in compaction, the following API was added to HBaseAdmin:</span><br />
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;"></span><br />
<pre>public CompactionState getCompactionState(final String tableNameOrRegionName)
    throws IOException, InterruptedException {</pre>
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<br />
<span style="color: #222222; font-family: arial, sans-serif;">Here is a picture depicting the compaction state of a table:</span><br />
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;"><a href="https://issues.apache.org/jira/secure/attachment/12528264/table_ui.png">https://issues.apache.org/jira/secure/attachment/12528264/table_ui.png</a></span><br />
<span style="color: #222222; font-family: arial, sans-serif; font-size: x-small;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;">This feature is in 0.95 and beyond.</span><br />
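The returned CompactionState can simply be polled until it falls back to NONE. Below is a minimal sketch of such a wait loop; the Supplier stands in for a live HBaseAdmin so the example is self-contained (only the enum values mirror HBase's CompactionState, the rest is illustrative):

```java
import java.util.function.Supplier;

public class CompactionWait {
    // Mirrors org.apache.hadoop.hbase.client.CompactionState values.
    public enum State { NONE, MINOR, MAJOR, MAJOR_AND_MINOR }

    /**
     * Polls until the state drops back to NONE or maxPolls is reached.
     * Against a live cluster the supplier would wrap
     * admin.getCompactionState(tableName). Returns the number of polls made.
     */
    public static int waitForCompaction(Supplier<State> state, int maxPolls, long sleepMillis) {
        int polls = 0;
        while (polls < maxPolls) {
            polls++;
            if (state.get() == State.NONE) {
                return polls;                    // compaction finished
            }
            try {
                Thread.sleep(sleepMillis);       // back off before the next poll
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return polls;
            }
        }
        return polls;                            // gave up; still compacting
    }

    public static void main(String[] args) {
        // Simulated region server: reports MAJOR twice, then NONE.
        int[] calls = {0};
        Supplier<State> fake = () -> ++calls[0] < 3 ? State.MAJOR : State.NONE;
        System.out.println(waitForCompaction(fake, 10, 1)); // prints 3
    }
}
```

Remember that this only tells you a compaction is no longer running on the table; it doesn't say which one ran.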
<span style="color: #222222; font-family: arial, sans-serif; font-size: x-small;"><br /></span><span style="color: #222222; font-family: arial, sans-serif;"><b>Should a custom script be written to compact regions one by one?</b></span><br />
<span style="color: #222222; font-family: arial, sans-serif; font-size: x-small;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;">Major compactions are needed if there are many writes/deletions to your table.</span><br />
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;">Since the command for triggering a major compaction is asynchronous, a compaction storm may result if the commands are not properly issued to the regions (w.r.t. timing). Jean-Daniel suggested compacting a subset of the regions at a time.</span><br />
<span style="color: #222222; font-family: arial, sans-serif;">One can monitor compaction queue length on region server using JMX.</span><br />
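The reading side of that monitoring can be sketched with the standard JMX API. The MBean below is a stand-in registered locally so the example is self-contained; the real metric is published by the region server itself, and the ObjectName and attribute naming here are illustrative, not HBase's actual ones:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard-MBean pattern: the interface name must be the class name + "MBean".
interface QueueStatsMBean {
    int getCompactionQueueSize();
}

class QueueStats implements QueueStatsMBean {
    private final int size;
    QueueStats(int size) { this.size = size; }
    public int getCompactionQueueSize() { return size; }
}

public class CompactionQueueProbe {
    /** Reads the CompactionQueueSize attribute of the named MBean. */
    public static int readQueueSize(MBeanServer server, ObjectName name) throws Exception {
        return (Integer) server.getAttribute(name, "CompactionQueueSize");
    }

    /** Registers a stand-in bean locally and reads it back. */
    public static int demo(int size, String objectName) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(objectName);
        server.registerMBean(new QueueStats(size), name);
        return readQueueSize(server, name);
    }

    public static void main(String[] args) throws Exception {
        // Illustrative ObjectName; a real region server registers its own metrics beans.
        System.out.println(demo(4, "demo:service=RegionServer,name=Statistics")); // prints 4
    }
}
```

Against a real cluster you would connect a JMXConnector to the region server's JMX port and query its metrics bean instead of registering one yourself.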
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;"><b>Are there new algorithms being developed to improve major compaction?</b></span><br />
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;">Yes.</span><br />
<span style="color: #222222; font-family: arial, sans-serif;">One of the initiatives is the stripe compaction. See parent JIRA: <a href="https://issues.apache.org/jira/browse/HBASE-7667">HBASE-7667</a></span><br />
<br />
Instead of creating a table with a large number of small regions, the proposal combines LevelDB ideas with the many-regions initiative. Basically, the key space of one large region is partitioned into multiple sub-ranges which are non-overlapping and contiguous.<br />
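The partitioning idea can be sketched with a simple lookup: given sorted stripe boundaries, a row key is routed to exactly one non-overlapping, contiguous sub-range. This is illustrative only; the actual stripe compaction work in HBASE-7667 additionally keeps a separate file set per stripe:

```java
import java.util.Arrays;

public class StripeLookup {
    /**
     * Routes a row key to one of several non-overlapping, contiguous
     * sub-ranges ("stripes") of a region's key space. boundaries[i] is the
     * first key of stripe i+1; keys below boundaries[0] go to stripe 0.
     */
    public static int stripeFor(String rowKey, String[] boundaries) {
        int pos = Arrays.binarySearch(boundaries, rowKey);
        // binarySearch returns (-insertionPoint - 1) on a miss.
        return pos >= 0 ? pos + 1 : -pos - 1;
    }

    public static void main(String[] args) {
        String[] boundaries = {"g", "n", "t"};          // 4 stripes over one region
        System.out.println(stripeFor("a", boundaries)); // prints 0
        System.out.println(stripeFor("g", boundaries)); // prints 1 (a boundary starts the next stripe)
        System.out.println(stripeFor("z", boundaries)); // prints 3
    }
}
```

Because the stripes are contiguous and non-overlapping, a compaction can be confined to one stripe without touching the rest of the region's data.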
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;">Here is the design doc:</span><br />
<span style="color: #222222; font-family: arial, sans-serif;"><a href="https://issues.apache.org/jira/secure/attachment/12575449/Stripe%20compactions.pdf">https://issues.apache.org/jira/secure/attachment/12575449/Stripe%20compactions.pdf</a></span><br />
<br />
<b>Another</b> improvement is in <a href="https://issues.apache.org/jira/browse/HBASE-7842">HBASE-7842</a>, prior to which bulk loaded files were not handled correctly by the <span class="il" style="background-color: #ffffcc; color: #222222; font-family: arial, sans-serif;">compaction</span><span style="background-color: white; color: #222222; font-family: arial, sans-serif;"> selection algorithm: compacted files kept getting bigger and yet were still picked up by compaction, leading to longer and longer compaction times.</span><br />
<span style="color: #222222; font-family: arial, sans-serif;">When all the files in a store are chosen for compaction, the minor compaction is promoted to a major compaction.</span><br />
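That promotion rule can be sketched as: run the usual minor-compaction selection, and if the selection happens to cover every store file, treat the run as a major compaction. The selection policy below is a deliberate simplification (real HBase uses a ratio-based policy); only the promotion check mirrors the behavior described above:

```java
import java.util.ArrayList;
import java.util.List;

public class CompactionSelection {
    /**
     * Simplified minor-compaction selection: pick files no larger than
     * maxFileSize, up to maxFiles of them. Illustrative stand-in for the
     * real ratio-based selection policy.
     */
    public static List<Long> selectFiles(List<Long> fileSizes, long maxFileSize, int maxFiles) {
        List<Long> selected = new ArrayList<>();
        for (long size : fileSizes) {
            if (size <= maxFileSize && selected.size() < maxFiles) {
                selected.add(size);
            }
        }
        return selected;
    }

    /** A minor compaction that covers every file in the store is promoted to a major one. */
    public static boolean isPromotedToMajor(List<Long> selected, List<Long> allFiles) {
        return selected.size() == allFiles.size();
    }
}
```

A major compaction is the only kind that can drop deletes and expired cells, which is why the promotion matters.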
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;"><b>What are the config parameters that I should watch out for?</b></span><br />
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="background-color: white; font-family: monospace; line-height: 15px;">hbase.hstore.compactionThreshold (Note: in 0.95 and beyond, this becomes </span><span style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">hbase.hstore.compaction.min)</span><br />
<span style="background-color: white; font-family: monospace; line-height: 15px;">hbase.hstore.compaction.max</span><br />
<span style="background-color: white; font-family: monospace; line-height: 15px;">hbase.hregion.majorcompaction</span><br />
<span style="background-color: white; font-family: monospace; line-height: 15px;">hbase.hstore.blockingStoreFiles</span><br />
<span style="background-color: white; font-family: monospace; line-height: 15px;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;">Compaction is closely related to flushing (from memstore):</span><br />
<span style="color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="background-color: white; color: #222222; font-family: arial, sans-serif;">hbase.regionserver.global.</span><wbr style="background-color: white; color: #222222; font-family: arial, sans-serif;"></wbr><span style="background-color: white; color: #222222; font-family: arial, sans-serif;">memstore.lowerlimit</span><br />
<span style="background-color: white; color: #222222; font-family: arial, sans-serif;">hbase.regionserver.global.</span><wbr style="background-color: white; color: #222222; font-family: arial, sans-serif;"></wbr><span style="background-color: white; color: #222222; font-family: arial, sans-serif;">memstore.upperlimit</span><br />
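To make the list concrete, here is a hedged hbase-site.xml sketch. The values are illustrative defaults from the 0.90/0.94 era, not recommendations; setting hbase.hregion.majorcompaction to 0 is a common way to disable time-based major compactions so they can be issued manually:

```xml
<!-- Illustrative values only; tune for your workload. -->
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value> <!-- minor compaction is considered once a store has this many files -->
</property>
<property>
  <name>hbase.hstore.compaction.max</name>
  <value>10</value> <!-- upper bound on the number of files per compaction -->
</property>
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value> <!-- 0 disables periodic major compactions; schedule them manually -->
</property>
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>7</value> <!-- writes to a region block once a store reaches this many files -->
</property>
```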
<span style="background-color: white; color: #222222; font-family: arial, sans-serif;"><br /></span>
<span style="color: #222222; font-family: arial, sans-serif;">You can find explanation for the above parameters in </span><a href="http://hbase.apache.org/book.html" style="font-family: arial, sans-serif;">http://hbase.apache.org/book.html</a><span style="color: #222222; font-family: arial, sans-serif;"> </span>Ted Yuhttp://www.blogger.com/profile/08113013531845314169noreply@blogger.com5tag:blogger.com,1999:blog-2819364351396989881.post-83910454384438060232011-09-25T07:27:00.000-07:002011-09-25T07:44:25.307-07:00Streamlining patch submissionI have spent considerable time fixing up HBase builds on Apache Jenkins. I want to share a few points for HBase contributors so that it is easier to maintain stable builds.<br />
<br />
For contributors, I understand that running the whole test suite takes so much time that you may not have the luxury of doing it; Apache Jenkins wouldn't do it when you press the Submit Patch button.<br />
If this is the case, please use Eclipse (or another tool) to identify the tests that exercise the classes/methods in your patch, and run them. Also, clearly state in the JIRA which tests you ran.<br />
<br />
If you have a Linux box where you can run the whole test suite, it would be nice to utilize that resource and run the whole suite. Then please state this fact on the JIRA as well.<br />
Considering Todd's suggestion of holding off commit for 24 hours after code review, a 2-hour test run isn't that long.<br />
<br />
Sometimes you may see the following (from 0.92 build 18):<br />
<pre>Tests run: 1004, Failures: 0, Errors: 0, Skipped: 21
[INFO] ----------------------------------
[INFO] BUILD FAILURE
[INFO] ----------------------------------
[INFO] Total time: 1:51:41.797s</pre>
You should examine the test summary above these lines and find out which test(s) hung. For this case it was TestMasterFailover:<br />
<pre>Running org.apache.hadoop.hbase.master.TestMasterFailover
Running org.apache.hadoop.hbase.master.TestMasterRestartAfterDisablingTable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.265 sec</pre>
<b>Write deterministic test cases</b><br />
Sometimes the new tests you added pass for you but fail when the committer runs them.<br />
Here is the analysis for one such case: the original version of TestRegionServerCoprocessorExceptionWithAbort.<br />
The following call would create 25 regions for the test table:<br />
<div class="preformatted panel" style="border-width: 1px;"><div class="preformattedContent panelContent"><pre>TEST_UTIL.createMultiRegions(table, TEST_FAMILY);</pre></div></div>If we look at HBaseTestingUtility:<br />
<div class="preformatted panel" style="border-width: 1px;"><div class="preformattedContent panelContent"><pre>public static final byte[][] KEYS = {
HConstants.EMPTY_BYTE_ARRAY, Bytes.toBytes("bbb"),</pre></div></div>We can see that the row in the test<br />
<div class="preformatted panel" style="border-width: 1px;"><div class="preformattedContent panelContent"><pre>final byte[] ROW = Bytes.toBytes("bbb");</pre></div></div>actually doesn't belong to the first region.<br />
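The reason is easy to see if we model the region lookup: a row belongs to the region whose start key is the greatest one not exceeding the row, so with split keys "", "bbb", "ccc", ... the row "bbb" opens the second region, not the first. A self-contained sketch (START_KEYS mirrors the first entries of HBaseTestingUtility.KEYS; the helper itself is illustrative):

```java
import java.util.Arrays;

public class RegionLocator {
    // First few split keys from HBaseTestingUtility.KEYS.
    public static final String[] START_KEYS = {"", "bbb", "ccc", "ddd"};

    /** Index of the region whose [startKey, nextStartKey) range holds the row. */
    public static int regionFor(String row, String[] startKeys) {
        int pos = Arrays.binarySearch(startKeys, row);
        // On a miss, binarySearch returns (-insertionPoint - 1);
        // the owning region is the one just before the insertion point.
        return pos >= 0 ? pos : -pos - 2;
    }

    public static void main(String[] args) {
        System.out.println(regionFor("bbb", START_KEYS)); // prints 1: not the first region
        System.out.println(regionFor("aaa", START_KEYS)); // prints 0: the row the fixed test uses
    }
}
```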
The following call was used to track the underlying region server:<br />
<div class="preformatted panel" style="border-width: 1px;"><div class="preformattedContent panelContent"><pre>HRegionServer regionServer =
    TEST_UTIL.getRSForFirstRegionInTable(TEST_TABLE);</pre></div></div>
<br />
This means the regionServer we were waiting for may not host the region where the NPE happened.<br />
I modified the above line slightly and the test passed reliably:<br />
<div class="preformatted panel" style="border-width: 1px;"><div class="preformattedContent panelContent"><pre>final byte[] ROW = Bytes.toBytes("aaa");</pre><pre> </pre><pre><span style="font-family: Georgia, 'Times New Roman', serif; font-size: small;">The takeaway is that scoping the test scenario properly and deterministically would simplify patch submission.</span></pre></div></div>Ted Yuhttp://www.blogger.com/profile/08113013531845314169noreply@blogger.com0tag:blogger.com,1999:blog-2819364351396989881.post-77707639547937475512011-04-22T20:12:00.000-07:002011-07-15T21:12:19.964-07:00Managing connections in HBase 0.90 and beyondUsers of HBase have complained about the high number of connections to ZooKeeper after upgrading to 0.90. Jean-Daniel, responding to user comments, did some initial work in <a href="https://issues.apache.org/jira/browse/HBASE-3734">HBASE-3734</a>.<br />
<br />
In the following discussion, the term connection refers to the connection between HBase client and HBase, managed by HConnectionManager.<br />
<br />
In the early days of the 0.90 release, some decisions were made in <a href="https://issues.apache.org/jira/browse/HBASE-2925">HBASE-2925</a> whereby a new connection was established for a given Configuration instance without looking at the connection-specific properties in it.<br />
<br />
I made a comment in <a href="https://issues.apache.org/jira/browse/HBASE-3734">HBASE-3734</a> at 05/Apr/11 05:20 with the following two ideas:<br />
<ul><li>We should reuse connection based on connection-specific properties, such as<span class="code-quote"> "hbase.zookeeper.quorum" </span></li>
</ul><ul><li>In order for HConnectionManager.deleteConnection() to work, reference counting has to be used.</li>
</ul>I want to thank Karthick, who was brave enough to bite the bullet and try to nail this issue through <a href="https://issues.apache.org/jira/browse/HBASE-3777">HBASE-3777</a>.<br />
He and I worked together for over a week to come up with the solution: patch version 6.<br />
<br />
We discovered a missing connection property, HConstants.ZOOKEEPER_ZNODE_PARENT, which caused TestHBaseTestingUtility to fail.<br />
<br />
We refined the implementation several times based on the outcome of test results.<br />
<br />
Here is a summary of what we did:<br />
<ul><li>Reference counting based connection sharing is implemented</li>
<li>There were 33 references to HConnectionManager.getConnection(); we made sure that all of those references are properly deleted (released)</li>
<li>Some modifications were made in unit tests to illustrate the recommended approach; see TestTableMapReduce.java</li>
</ul>For connection sharing, Karthick introduced the following:<br />
<pre>static class HConnectionKey {
  public static String[] CONNECTION_PROPERTIES = new String[] {
      HConstants.ZOOKEEPER_QUORUM,
      HConstants.ZOOKEEPER_ZNODE_PARENT,
      HConstants.ZOOKEEPER_CLIENT_PORT,
      HConstants.ZOOKEEPER_RECOVERABLE_WAITTIME,
      HConstants.HBASE_CLIENT_PAUSE,
      HConstants.HBASE_CLIENT_RETRIES_NUMBER,
      HConstants.HBASE_CLIENT_RPC_MAXATTEMPTS,
      HConstants.HBASE_RPC_TIMEOUT_KEY,
      HConstants.HBASE_CLIENT_PREFETCH_LIMIT,
      HConstants.HBASE_META_SCANNER_CACHING,
      HConstants.HBASE_CLIENT_INSTANCE_ID };</pre>
<br />
In a Configuration, if any of the above connection properties differs from those of every cached connection, we create a new connection. Otherwise, an existing connection whose underlying connection properties carry the same values is returned.<br />
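The sharing scheme can be sketched as a map keyed by those property values, with a reference count per entry. This is a toy model of the idea, not the actual HConnectionManager code; the property names and Conn type are simplified stand-ins:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ConnectionCache {
    /** Stand-in for an HConnection; only identity and the ref count matter here. */
    public static class Conn {
        int refCount;
        final Map<String, String> props;
        Conn(Map<String, String> props) { this.props = props; }
    }

    // Properties that participate in the cache key, as in HConnectionKey.
    static final List<String> KEY_PROPS =
        List.of("hbase.zookeeper.quorum", "zookeeper.znode.parent",
                "hbase.zookeeper.property.clientPort");

    private final Map<Map<String, String>, Conn> cache = new HashMap<>();

    /** Returns a shared connection for configs whose key properties match. */
    public synchronized Conn getConnection(Map<String, String> conf) {
        Map<String, String> key = new HashMap<>();
        for (String p : KEY_PROPS) key.put(p, conf.get(p));
        Conn c = cache.computeIfAbsent(key, Conn::new);
        c.refCount++;
        return c;
    }

    /** Drops one reference; the connection is evicted when the count hits zero. */
    public synchronized void deleteConnection(Conn c) {
        if (--c.refCount == 0) {
            cache.values().remove(c);
        }
    }

    public synchronized int size() { return cache.size(); }
}
```

Two Configuration objects that agree on every key property map to the same cache entry, which is exactly why deleteConnection() needs the reference count: the entry may still be in use by another caller.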
<br />
Initially, an attempt was made to use a Java finalizer to clean up unused connections. It turned out object finalization is tricky: a client may get a closed connection if multiple HTable instances share it and some of them go out of scope, leading to finalizer execution.<br />
<br />
So for HTable, we expect the user to explicitly call the close() method once the HTable instance is no longer used. Take a look at the modified src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java<br />
<br />
Finally, we changed how TableOutputFormat closes connections. Previously, HConnectionManager.deleteAllConnections(true) was called in TableRecordWriter() because it was an easy way to deal with connection leaks. Now, calling table.close() is enough.<br />
<br />
In order to keep the 0.90.3 release stable, HBASE-3777 wasn't integrated into 0.90.3.<br />
<br />
Epilog:<br />
When <a class="user-hover active" href="https://issues.apache.org/jira/secure/ViewProfile.jspa?name=ram_krish" id="issue_summary_assignee_ram_krish" rel="ram_krish">Ramkrishna</a> worked on <a href="https://issues.apache.org/jira/browse/HBASE-4052">HBASE-4052</a>, he discovered a problem in TRUNK which was not in 0.90 branch. Namely, after Master failover, HBaseAdmin.getConnection() would still get the shared connection which points to the previous active master. Using this stale connection results in an IOException wrapped in UndeclaredThrowableException.<br />
<br />
I provided a fix for this problem through <a href="https://issues.apache.org/jira/browse/HBASE-4087">HBASE-4087</a>, where the HBaseAdmin constructor detects such an issue and asks HConnectionManager to remove the stale connection from its cache.<br />
<br />
Here is the code snippet:<br />
<pre>for (; tries < numRetries; ++tries) {
  try {
    this.connection.getMaster();
    break;
  } catch (UndeclaredThrowableException ute) {
    HConnectionManager.deleteStaleConnection(this.connection);
    this.connection = HConnectionManager.getConnection(this.conf);
  }
}</pre>Ted Yuhttp://www.blogger.com/profile/08113013531845314169noreply@blogger.com0tag:blogger.com,1999:blog-2819364351396989881.post-9391418425849958332011-04-14T09:57:00.000-07:002011-06-01T14:39:49.236-07:00Load Balancer in HBase 0.90Working with Stanislav Barton on the load balancer in HBase 0.90, I was asked for a document on how it works. This writeup touches on the internals of the load balancer and how it evolved over time.<br />
<br />
The Master code (including the load balancer) was rewritten for HBase 0.90.<br />
<br />
When a region receives many writes and is split, the daughter regions are placed on the same region server as the parent region. Stan proposed to change this behavior and I summarized in <a href="https://issues.apache.org/jira/browse/HBASE-3779">HBASE-3779</a>.<br />
<br />
<a href="https://issues.apache.org/jira/browse/HBASE-3586">HBASE-3586</a> tried to solve the problem where the load balancer moves inactive regions off an overloaded region server by randomly selecting regions to offload. This is to handle the potential problem of moving too many hot regions onto a region server which recently joined the cluster.<br />
<br />
But this random selection isn't optimal. On Stan's cluster, there are around 600 regions on each region server. When 30 new regions were created on the same region server, the random selector chose only 3 of the 30 new regions for reassignment; the rest of the selection came from inactive (old) regions. This is expected behavior because new and old regions were selected with equal probability.<br />
<br />
Basically, we traded some optimization for the safety of not overloading a newly discovered region server.<br />
<br />
So I continued the enhancement in <a href="https://issues.apache.org/jira/browse/HBASE-3609">HBASE-3609</a>, where one of the goals is to remove randomness from the LoadBalancer so that we can deterministically produce near-optimal balancing actions.<br />
<br />
If at least one region server joined the cluster just before the current balancing action, both new and old regions from overloaded region servers are moved onto underloaded region servers. Otherwise, I find the new regions and put them on different underloaded servers. Previously, one underloaded server would be filled up before the next underloaded server was considered.<br />
<br />
I also utilize the randomizer, which shuffles the list of underloaded region servers.<br />
This way we avoid distributing the offloaded regions to only a few region servers.<br />
<br />
<a href="https://issues.apache.org/jira/browse/HBASE-3609">HBASE-3609</a> has been integrated into trunk as of Apr 18th, 2011.<br />
<br />
<a href="https://issues.apache.org/jira/browse/HBASE-3704">HBASE-3704</a> would help users observe the distribution of regions. It is currently only in HBase trunk code.<br />
<br />
Also related is <a href="https://issues.apache.org/jira/browse/HBASE-3373">HBASE-3373</a>. Stan proposed to make it more general: a new load balancer policy could balance the number of regions per RS per table, rather than the total number of regions across all tables.<br />
<br />
If you're interested in more detail, please take a look at the javadoc for LoadBalancer.balanceCluster()<br />
<br />
For HBase trunk, I implemented <a href="https://issues.apache.org/jira/browse/HBASE-3681">HBASE-3681</a> upon <a class="user-hover active" href="https://issues.apache.org/jira/secure/ViewProfile.jspa?name=jdcryans" id="issue_summary_reporter_jdcryans" rel="jdcryans">Jean-Daniel Cryans</a>'s request. For 0.90.2 and later, the default value of sloppiness is 0.<br />
<br />
I am planning the next generation of the load balancer, where a request histogram would play an important role in deciding which regions to move. Please take a look at <a href="https://issues.apache.org/jira/browse/HBASE-3679">HBASE-3679</a><br />
<br />
The HBaseWD project introduced multiple scanners for bucketed writes. I plan to accommodate this new feature through <a href="https://issues.apache.org/jira/browse/HBASE-3811">HBASE-3811</a>, where additional attributes in the Scan object would allow the balancer to group the scanners generated by HBaseWD.<br />
<br />
<a href="https://issues.apache.org/jira/browse/HBASE-3943">HBASE-3943</a> is supposed to solve the problem where region reassignment disrupts (potentially long) compaction.<br />
<br />
<a href="https://issues.apache.org/jira/browse/HBASE-3945">HBASE-3945</a> tries to give regions more stability by not reassigning region(s) in consecutive balancing actions.Ted Yuhttp://www.blogger.com/profile/08113013531845314169noreply@blogger.com0tag:blogger.com,1999:blog-2819364351396989881.post-25194770427956073592011-03-31T21:29:00.000-07:002011-05-05T17:12:32.281-07:00Genericizing EndpointCoprocessorOur new release will use the <a href="http://hbaseblog.com/2010/11/30/hbase-coprocessors/">coprocessor framework</a> of HBase.<br />
<a href="https://issues.apache.org/jira/browse/HBASE-1512">HBASE-1512</a> provided reference implementation for aggregation.<br />
<br />
Originally, the value for every column was interpreted as a Long.<br />
I made the implementation more generic by introducing ColumnInterpreter, which understands the schema of the underlying table (by examining column family:column qualifier, e.g.).<br />
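The shape of that abstraction can be sketched as a small interface plus one concrete interpreter. ColumnInterpreter below is a simplified stand-in for the interface that went into HBASE-1512 (the real one also sees the column family and qualifier); the Long interpreter and the generic max show how the aggregate code stays type-agnostic:

```java
import java.io.Serializable;
import java.nio.ByteBuffer;

public class Interpreters {
    /** Translates raw cell bytes into a comparable value of type T. */
    public interface ColumnInterpreter<T extends Comparable<T>> extends Serializable {
        T getValue(byte[] cellValue);
        T minValue();               // identity element for max-aggregation, e.g. Long.MIN_VALUE
    }

    /** Interprets each cell as a big-endian 8-byte long. */
    public static class LongColumnInterpreter implements ColumnInterpreter<Long> {
        public Long getValue(byte[] cellValue) {
            return cellValue == null ? null : ByteBuffer.wrap(cellValue).getLong();
        }
        public Long minValue() { return Long.MIN_VALUE; }
    }

    /** Generic max over raw cells; the interpreter supplies all type knowledge. */
    public static <T extends Comparable<T>> T max(ColumnInterpreter<T> ci, byte[][] cells) {
        T best = ci.minValue();
        for (byte[] cell : cells) {
            T v = ci.getValue(cell);
            if (v != null && v.compareTo(best) > 0) {
                best = v;
            }
        }
        return best;
    }
}
```

Swapping in a different interpreter (say, one that decodes a complex serialized object) changes the aggregation semantics without touching the aggregate code itself.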
Here are 3 guidelines I followed during development:<br />
<ol><li>The user shouldn't modify HbaseObjectWritable directly for the interpreter class which is to be executed on the region server. This is achieved by making ColumnInterpreter extend Serializable</li>
<li>We (plan to) store objects of MeasureWritable, a relatively complex class, in HBase. Using interpreter would give us flexibility in computing aggregates. </li>
<li>We load AggregateProtocolImpl.class into CoprocessorHost. Interpreter feeds various values (such as Long.MIN_VALUE) of concrete type (Long) into AggregateProtocolImpl. This simplifies class loading for CoprocessorHost </li>
</ol>During code review, we tried to distinguish the return value for the case where there is no result from a particular region by using null.<br />
<br />
However, we got the following due to type erasure:<br />
<br />
<div id=":12f"><pre>2011-04-26 17:55:48,229 INFO [IPC Server handler 3 on 64132] coprocessor.AggregateImplementation(66): Maximum from this region is TestTable,,1303840188042.18ec4a1af1b0931be64fc084d2eb9309.: null
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(336): Unsupported type class java.lang.Object
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(339): writeClassCode
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(339): write
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(339): writeObject
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(339): write
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(339): writeObject
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(339): write
2011-04-26 17:55:48,229 ERROR [IPC Server handler 3 on 64132] io.HbaseObjectWritable(339): run
2011-04-26 17:55:48,229 WARN [IPC Server handler 3 on 64132] ipc.HBaseServer$Handler(1122): IPC Server handler 3 on 64132 caught: java.lang.UnsupportedOperationException: No code for unexpected class java.lang.Object
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.writeClassCode(HbaseObjectWritable.java:343)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable$NullInstance.write(HbaseObjectWritable.java:311)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.writeObject(HbaseObjectWritable.java:449)
    at org.apache.hadoop.hbase.client.coprocessor.ExecResult.write(ExecResult.java:74)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.writeObject(HbaseObjectWritable.java:449)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.write(HbaseObjectWritable.java:284)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1092)</pre>
<br />
The solution is to apply Writable.class for the null value.<br />
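The root cause can be reproduced in miniature: a serializer that dispatches on the runtime class has nothing to dispatch on for a null, and the generic type parameter has been erased, so the caller must supply an explicit sentinel class. A self-contained sketch (the code table and Writable marker below are made up, only loosely modeled on HbaseObjectWritable):

```java
import java.util.HashMap;
import java.util.Map;

public class ClassCodes {
    /** Made-up marker type standing in for org.apache.hadoop.io.Writable. */
    public interface Writable {}

    // Made-up code table; HbaseObjectWritable keeps a similar class-to-code map.
    static final Map<Class<?>, Integer> CODES = new HashMap<>();
    static {
        CODES.put(Long.class, 1);
        CODES.put(String.class, 2);
        CODES.put(Writable.class, 3);   // sentinel applied to null values
    }

    /**
     * value.getClass() cannot work for null, and the erased generic type is
     * just Object at runtime, so a null needs an explicit declared class.
     */
    public static int codeFor(Object value, Class<?> declaredClass) {
        Class<?> c = value != null ? value.getClass() : declaredClass;
        Integer code = CODES.get(c);
        if (code == null) {
            throw new UnsupportedOperationException(
                "No code for unexpected class " + c.getName());
        }
        return code;
    }
}
```

Passing Object.class for a null reproduces the "No code for unexpected class java.lang.Object" failure above; passing the Writable sentinel makes the lookup succeed.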
<br />
We didn't consider race conditions in callbacks on the client side. See <a href="https://issues.apache.org/jira/browse/HBASE-3862">HBASE-3862</a> <br />
</div>Ted Yuhttp://www.blogger.com/profile/08113013531845314169noreply@blogger.com0