Thursday, January 30, 2014

Unable to create new native thread and max user processes

Very often I have seen the "unable to create new native thread" error when writing multi-threaded Java programs on a Linux box.


The stack trace usually looks like this:
java.lang.OutOfMemoryError: unable to create new native thread 
        at java.lang.Thread.start0(Native Method) 
        at java.lang.Thread.start(Thread.java:691) 

One reason for this is that the max user processes limit for the given user on your Linux box is too low. You can check it by typing the command ulimit -a:



[vanji@myMachine area51]$ ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 95153
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024 
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
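To check just this one limit, and how close the user already is to it, something like the sketch below can be used. On Linux the nproc limit actually counts threads (light-weight processes), so threads per user is the relevant number; this assumes a Linux procps-style ps that supports -L.

```shell
# Soft and hard values of the "max user processes" limit only:
ulimit -Su
ulimit -Hu

# Count the threads the current user already has; each Java thread
# is one of these, so this is what counts against the nproc limit:
nthreads=$(ps -eLo uid= | awk -v u="$(id -u)" '$1 == u' | wc -l)
echo "current user has $nthreads threads"
```

If nthreads is close to the soft limit shown by ulimit -Su, the next Thread.start() in a JVM is likely to fail exactly as in the stack trace above.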

So how do we increase the max user processes limit?


Follow these steps:

Open /etc/security/limits.conf in an editor (as root), e.g. vi /etc/security/limits.conf, and add the lines below:

*          soft     nproc          65535
*          hard     nproc          65535
*          soft     nofile         65535
*          hard     nofile         65535

Save and exit, then log in again (the limits are applied at login) and re-check the max user processes limit with ulimit -a:



[vanji@myMachine area51]$ ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 95153
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65535 
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
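Note that limits.conf is applied by pam_limits at login time, so the new values only show up in sessions started after the change. Besides ulimit, on Linux the effective limits of any running process can be read straight from /proc, which is handy for checking a JVM that is already running:

```shell
# The current shell's own view of the nproc limit:
ulimit -Su
ulimit -Hu

# The kernel's view for a running process (here: this shell itself;
# replace "self" with a JVM's pid to inspect it instead):
grep "Max processes" /proc/self/limits
```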





Wednesday, January 29, 2014

I-nodes and Cassandra

While I was working with Cassandra and WSO2 Message Broker, everything worked fine for quite a few days. Then, out of the blue, the Message Broker started to behave strangely and complain that "Cassandra is not available"!

So I quickly checked the Cassandra logs and found the error below, logged around the same time the Message Broker started to fail:


INFO [CompactionExecutor:169] 2014-01-24 20:54:07,686 AutoSavingCache.java (line 250) Saved KeyCache (31 items) in 29 ms
ERROR [CompactionExecutor:170] 2014-01-25 00:54:07,685 CassandraDaemon.java (line 185) Exception in thread Thread[CompactionExecutor:170,1,main]
FSWriteError in /esb/apacheCassandra1/bin/./repository/database/cassandra/saved_caches/system-local-KeyCache-b.db1519539237124082054.tmp
        at org.apache.cassandra.io.util.SequentialWriter.flushData(SequentialWriter.java:263)
        at org.apache.cassandra.io.util.SequentialWriter.flushInternal(SequentialWriter.java:215)
        at org.apache.cassandra.io.util.SequentialWriter.syncInternal(SequentialWriter.java:187)
        at org.apache.cassandra.io.util.SequentialWriter.close(SequentialWriter.java:377)
        at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:171)
        at org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:234)
        at org.apache.cassandra.db.compaction.CompactionManager$10.run(CompactionManager.java:860)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: No space left on device
        at java.io.RandomAccessFile.writeBytes(Native Method)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:499)
        at org.apache.cassandra.io.util.SequentialWriter.flushData(SequentialWriter.java:259)
        ... 12 more



The error says the root cause of this issue is "No space left on device". I quickly double-checked the disk usage with the df -h and df Linux commands and found that there was plenty of free disk space!


After some time searching the internet, I found the answer: the root cause of the above error was that the inodes of the given partition were fully utilized (100%), so when Cassandra tried to allocate one, there really was "no space left on device"!

This might sound like a weird story, so let me explain it step by step.

What is an inode?

An inode is a data structure found in many Unix file systems. Each inode stores the metadata of a file system object, but not the file's data content or its name (except in certain cases in modern file systems).

Put more simply, it is a “database” of all information about a file except its contents and its name.
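A quick way to see inodes in action (assuming GNU coreutils for the stat -c format):

```shell
f=$(mktemp)                 # create a temporary file
ls -i "$f"                  # first column is the file's inode number
inode=$(stat -c %i "$f")    # same number via stat
links=$(stat -c %h "$f")    # hard-link count, stored in the inode
echo "inode=$inode links=$links"
rm -f "$f"
```

Every file, directory, and symlink consumes one inode, which is why a huge number of tiny files can exhaust the inode table long before the blocks run out.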

How do I check the inode status of a disk?

If you type the df -i Linux command, you will get the inode usage details for each mounted file system.
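When df -i shows IUse% at 100%, the usual next step is to find which directory tree is holding all the files. A self-contained demo of per-directory inode counting (find lists the directory itself plus everything inside it, which matches the inodes it consumes):

```shell
# Build a small demo tree with a known number of entries:
demo=$(mktemp -d)
mkdir -p "$demo/a" "$demo/b"
touch "$demo/a/f1" "$demo/a/f2" "$demo/b/f1"

# One line per subdirectory: inodes used, then the path,
# biggest consumer first:
for d in "$demo"/*/; do
    printf '%d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn
# "$demo/a" uses 3 inodes (the directory plus its two files)

used_a=$(find "$demo/a" | wc -l)
rm -rf "$demo"
```

Pointing the same loop at the suspect mount point (e.g. the Cassandra data directory) quickly reveals where the inodes went.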




The "no space left on device" error is not necessarily caused by running out of storage capacity, as the message suggests; it can also be caused by running out of inodes on the file system. In other words, a given file system can only hold so many files. Running df will suggest everything is fine, while df -i reveals the real problem.

It's not uncommon in situations where you know you'll have a lot of small files to build a file system with an explicitly larger inode table.
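For example, mkfs.ext4 accepts -N (total inode count) or -i (bytes per inode) at creation time. A sketch that can be tried without root on a loopback image file; it assumes e2fsprogs is installed, and the 64 MB size and 50000-inode request are arbitrary numbers for the demo:

```shell
img=$(mktemp)
truncate -s 64M "$img"               # sparse 64 MB image file
mkfs.ext4 -q -F -N 50000 "$img"      # ask for roughly 50000 inodes
                                     # (far above the default ratio)
inodes=$(tune2fs -l "$img" | awk -F: '/^Inode count/ {gsub(/ /, ""); print $2}')
echo "file system created with $inodes inodes"
rm -f "$img"
```

mkfs rounds the requested count to fit the block-group layout, so the final number will be near, not exactly, what was asked for. The same -N flag on the real device (after backing up) is how the larger inode table gets built.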

Is it possible to change the inode count dynamically?

Hmm, the answer is no!
So how do we do it? The answer I found in [3] is that you need to back up your data, create a new file system, and restore your data.

So how do we prevent Cassandra from failing because of this issue?

There are three preventive measures:
1) Delete files that you no longer need. (Be careful: if someone is still writing to a file you delete, that data may be lost, and the inode is not freed until the file is closed.)
2) Put the system on a different file system with a higher inode count.
3) Fine-tune Cassandra according to your needs. Example cassandra.yaml settings:
   flush_largest_memtables_at: 0.45
   reduce_cache_sizes_at: 0.85
   reduce_cache_capacity_to: 0.6
   commitlog_total_space_in_mb: 16
   commitlog_segment_size_in_mb: 16
   memtable_total_space_in_mb: 512



[1] http://en.wikipedia.org/wiki/Inode
[2] http://unix.stackexchange.com/questions/26598/how-can-i-increase-the-number-of-inodes-in-an-ext4-filesystem
[3] http://www.datastax.com/dev/blog/cassandra-file-system-design

Tuesday, January 21, 2014

How to Install Oracle SQL Developer on Ubuntu

I wanted to install SQL Developer on my Ubuntu machines. Since Oracle SQL Developer is not in Ubuntu's repositories, I ended up downloading the "rpm" package from Oracle's web site.


Once the installation file is downloaded:

1) First install alien:
sudo apt-get install alien

2) Then, convert the rpm file to a deb file:
sudo alien --scripts -d sqldeveloper-4.0.0.13.80-1.noarch.rpm

3) Then install the generated deb package:
sudo dpkg -i sqldeveloper_4.0.0.13.80-2_all.deb

4) Create the following directory in your home folder; this is where the path to the JDK will be stored in the next step:
mkdir ~/.sqldeveloper/

5) Run sqldeveloper once from the terminal
sudo /opt/sqldeveloper/sqldeveloper.sh

6) The first time you run step (5), you will be prompted to enter the full path to a Java 7 JDK; the answer is stored in the directory created in step (4), where you can change it later.

If you've got the openjdk, it'll be:
/usr/lib/jvm/java-7-openjdk

For the official one it'll be:
/usr/lib/jvm/java-7-sun
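With SQL Developer 4.x, the path you enter is typically saved as a SetJavaHome line in a product.conf file under the directory from step (4); the "4.0.0" version subdirectory below is an assumption, so check what step (5) actually created on your machine. You can also write the file up front:

```shell
# Hypothetical version directory; adjust to match your install.
conf="$HOME/.sqldeveloper/4.0.0/product.conf"
mkdir -p "$(dirname "$conf")"
printf 'SetJavaHome %s\n' /usr/lib/jvm/java-7-openjdk > "$conf"
cat "$conf"
```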


Note that Oracle SQL Developer requires Java 7.