NI Linux Real-Time Discussions

Node.js on the cRIO?

Has anyone figured out how to install Node.js on the cRIO-9068?

I downloaded the source code and ran a make install, only to get a segmentation fault after about two hours. Then I tried a Node.js build already compiled for ARMv7; since there isn't an installer, I manually copied the files into /usr/local/{bin, include, lib, share} and corrected the permissions, but I wasn't able to run npm or node. The shell reported it "couldn't find the file /usr/local/bin/node" even though the file was clearly there, and npm likewise reported that it couldn't find node.

Is there any way to install it through a package manager that provides the ARMv7 version, so I don't have to do it manually? It doesn't show up in opkg list.

Thanks

Message 1 of 17

Hey neilz, welcome!

The segfault, if I had to guess, was due to running out of memory under NI Linux RT's strict memory configuration (not a ton of memory, no swap, and overcommit disabled). You could try the build again after providing swap, as I describe here: https://decibel.ni.com/content/message/107211#107211
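
For reference, a minimal sketch of what that looks like (the path and size are just examples, and this assumes mkswap/swapon are available on your image):

dd if=/dev/zero of=/swapfile bs=1M count=512    # create a 512 MB swap file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
cat /proc/swaps    # verify it is active
# after the build: swapoff /swapfile && rm /swapfile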

As for the other issue with the pre-built binary, I am betting it was built for ARMv7-A hard-float (most are), and is therefore not runnable on the soft-float image used with NI controllers. You can verify this by checking the binary with the readelf tool: readelf -a <binary> | grep Flags
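
For example (the path here is hypothetical, and the exact flag bits vary by toolchain), the ELF header states the float ABI directly:

readelf -h /usr/local/bin/node | grep Flags
# a hard-float build prints something like "..., Version5 EABI, hard-float ABI";
# a binary built for the NI soft-float image would report "soft-float ABI" instead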

If you're really adventurous, you could build the package using the same tools that we use to build the image itself: https://github.com/ni/nilrt

Message 2 of 17

Thanks Brad,

Could the segfault also be due to the stack size limitation? I work with David Staab, so this discussion is actually a continuation of my previous project. In my experience, an out-of-memory error doesn't manifest as a segmentation fault; previously, running out of memory looked like this:


cc1: out of memory allocating 952 bytes after a total of 22294528 bytes

As for readelf, here's my output after running readelf -A ~/node-v4.3.2-linux-armv7l/bin/node:

File Attributes
  Tag_CPU_name: "7-A"
  Tag_CPU_arch: v7
  Tag_CPU_arch_profile: Application
  Tag_ARM_ISA_use: Yes
  Tag_THUMB_ISA_use: Thumb-2
  Tag_FP_arch: VFPv3
  Tag_Advanced_SIMD_arch: NEONv1
  Tag_ABI_PCS_wchar_t: 4
  Tag_ABI_FP_denormal: Needed
  Tag_ABI_FP_exceptions: Needed
  Tag_ABI_FP_number_model: IEEE 754
  Tag_ABI_align_needed: 8-byte
  Tag_ABI_enum_size: int
  Tag_ABI_HardFP_use: SP and DP
  Tag_ABI_VFP_args: VFP registers
  Tag_CPU_unaligned_access: v6
  Tag_DIV_use: Not allowed

So, from what I understand, the Tag_ABI_VFP_args line means that it IS hard-float, correct? In which case, I think it's a dead end.

I guess I'll try using the nilrt tool!

Message 3 of 17

neilz wrote:

Could the segfault also be due to the stack size limitation? [...]

Yeah, I'd forgotten that the segfault is usually the stack. Try uncapping the limit in your shell. Still probably doesn't hurt to have the swap.

ulimit -s unlimited
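
The limit only applies to that shell and its children, so run it in the same session you build from, e.g.:

ulimit -s    # show the current limit (256 on these targets)
ulimit -s unlimited
make         # re-run the build from this same shell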

neilz continues:

So, from what I understand, the Tag_ABI_VFP_args line means that it IS hard-float, correct? I think that's a dead end.

Yep, that looks like hardfp to me. For completeness, that weird error you were getting (the binary not being found when it is right there, staring at you) happens because the binary requests a loader (the ELF interpreter), most likely /lib/ld-linux-armhf.so.3, and that loader can't be found on the target. Of course, even if it could be found, the loader would then turn around and attempt to load the soft-float libraries that exist on the system and fail, but at least it would fail with a saner message.
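
If you want to see that for yourself, the requested interpreter is recorded in the program headers:

readelf -l ~/node-v4.3.2-linux-armv7l/bin/node | grep interpreter
# a hard-float build typically requests /lib/ld-linux-armhf.so.3, which doesn't exist on the soft-float image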

neilz continues:

I guess I'll try using the nilrt tool!

It can be a bit fiddly, just a warning. I'd definitely check whether raising the stack limit fixes the segfault first.

Message 4 of 17

Based on the readelf output, that looks like a hardfp binary, so you are correct in assuming it won't work with the rest of the system.

You can increase the system-wide stack size by editing the STACK_SIZE variable in /etc/default/rcS on the target. Currently it is set to 256K.

If I recall correctly, LabVIEW sets its own 256K stack limit in the wrapper script that starts it, so it should be safe to increase the limit system-wide (though I have not tried it yet). Alternatively, you can use 'ulimit -s' to increase it just for the binary you are running.
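
To make that concrete, a sketch of the system-wide change (untested, and assuming STACK_SIZE takes a value in KB the way 'ulimit -s' does):

grep STACK_SIZE /etc/default/rcS    # current setting, e.g. STACK_SIZE=256
sed -i 's/^STACK_SIZE=.*/STACK_SIZE=8192/' /etc/default/rcS    # raise to 8 MB
# takes effect for processes started after the next reboot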

Message 5 of 17

gratian.crisan wrote:

You can increase the system-wide stack size by editing the STACK_SIZE variable in /etc/default/rcS on the target. [...]

I thought the segfault happened at compile time, so simply setting the stack limit in the shell you're building from should work. Of course, if I read that wrong, go with gratian.crisan's recommendation.

Message 6 of 17

The stack size change seemed to fix the issue.

Now I've run into the problem that building from source requires g++ 4.8 or higher, while the g++ package in opkg is 4.7.2-r20. At this point, is it time to pack up and go home?
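
(For anyone checking their own target, something like the following shows what's packaged and what's installed:)

opkg info g++    # packaged version, 4.7.2-r20 here
g++ --version    # version of the installed compiler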

I'm a bit concerned about diminishing returns on my time pursuing this down the rabbit hole of building the compiler from source.

Message 7 of 17

That is certainly a deep rabbit hole. Before jumping in, may I suggest giving the install from feeds via opkg another try?

nodejs4_0.4.12 is available in the 2015 release feeds (http://download.ni.com/ni-linux-rt/feeds/2015/arm/ipk/cortexa9-vfpv3/). I have not checked previous releases.

If you have the 2015 release installed on the target, you should be able to install it by running:

opkg update
opkg install nodejs4

This is what I get if I grep for it (after running opkg update):

# opkg list | grep nodejs
nodejs4 - 0.4.12-r0.7 - nodeJS Evented I/O for V8 JavaScript  nodeJS Evented I/O for V8
nodejs4-dbg - 0.4.12-r0.7 - nodeJS Evented I/O for V8 JavaScript - Debugging files  nodeJS Evented
nodejs4-dev - 0.4.12-r0.7 - nodeJS Evented I/O for V8 JavaScript - Development files  nodeJS Evented
nodejs4-doc - 0.4.12-r0.7 - nodeJS Evented I/O for V8 JavaScript - Documentation files  nodeJS

Message 8 of 17

Bleh, I'm guessing you're trying to build the latest from nodejs.org. Can you get by with older versions (as you'd want to build something from the 14.x nilrt branch to match your OS image)?

Message 9 of 17

Yes, I can get by with older versions. I just realized I'm using the 2014 release feeds, and nodejs isn't in them, so I'll configure opkg to point at the 2015 feeds instead, which is obviously way easier than rebuilding the compiler, lol.
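
For anyone else doing this, switching should just mean pointing the opkg feed URLs at the 2015 directory (the conf file name below is an assumption; check what's under /etc/opkg/ on your target):

# /etc/opkg/base-feeds.conf (name may differ)
src/gz main http://download.ni.com/ni-linux-rt/feeds/2015/arm/ipk/cortexa9-vfpv3
opkg update
opkg install nodejs4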

Message 10 of 17