Flintrock should try to raise ulimit for open files/connections #194

Closed · douglaz opened this issue Mar 28, 2017 · 2 comments

douglaz (Contributor) commented Mar 28, 2017

We got "Too many open files" errors on HDFS when using a big machine (an x1.32xlarge, I think). Flintrock should try to raise the ulimits to avoid such problems.

2017-02-25 06:05:24,008 INFO [DataXceiver for client DFSClient_attempt_201702250604_0001_m_000777_0_590913881_319 at /172.24.34.247:44830 [Receiving block BP-1011058414-172.24.37.193-1487995665469:blk_1073744657_3833]] datanode.DataNode (BlockReceiver.java:receiveBlock(934)) - Exception for BP-1011058414-172.24.37.193-1487995665469:blk_1073744657_3833
java.io.IOException: Too many open files
    at sun.nio.ch.IOUtil.makePipe(Native Method)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:409)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:325)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
    at java.lang.Thread.run(Thread.java:745)
2017-02-25 06:05:24,008 INFO [DataXceiver for client DFSClient_attempt_201702250604_0001_m_000618_0_396527967_321 at /172.24.32.100:39380 [Receiving block BP-1011058414-172.24.37.193-1487995665469:blk_1073744854_4030]] datanode.DataNode (BlockReceiver.java:receiveBlock(934)) - Exception for BP-1011058414-172.24.37.193-1487995665469:blk_1073744854_4030
java.io.IOException: Too many open files
  • Flintrock version: master
  • OS: Ubuntu
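
One way to do this (a minimal sketch, not Flintrock's actual code) would be to raise the nofile limit on every node during cluster setup, e.g. by appending entries to /etc/security/limits.conf before the Hadoop daemons start. The run_remote_command helper below is a hypothetical stand-in for however Flintrock executes a command over SSH, and the limit value is illustrative:

```python
# Sketch: raise the open-file (nofile) limit on a cluster node.
# run_remote_command is a hypothetical helper that runs a shell
# command on the node over SSH; NOFILE_LIMIT is an illustrative value.
NOFILE_LIMIT = 1048576

def raise_open_file_limit(run_remote_command):
    # pam_limits applies /etc/security/limits.conf entries at login,
    # so this has to happen before the Hadoop daemons are started.
    limits = "* soft nofile {0}\n* hard nofile {0}".format(NOFILE_LIMIT)
    run_remote_command(
        "echo '{0}' | sudo tee -a /etc/security/limits.conf".format(limits)
    )
```

Alternatively, a `ulimit -n` call in the daemons' launch scripts could raise the soft limit up to the hard limit without relying on PAM picking up the new configuration.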
nchammas (Owner) commented

Dup of #148?

douglaz (Contributor, Author) commented Mar 28, 2017

@nchammas it seems so.

douglaz closed this as completed Mar 28, 2017