Jetson TX2 User Guide

Original blog post: https://www.mikoychinese.top/post/20180209-jetson-tx2-wiki/

Nvidia Jetson is a series of computation processor boards from Nvidia. The Jetson TK1, TX1 and TX2 models all carry a Tegra processor from Nvidia. Nvidia claims the TX2 is an AI supercomputer on a module, powered by the NVIDIA Pascal architecture. Best of all, it packs this performance into a small, power-efficient form factor that is ideal for intelligent edge devices like robots, drones, smart cameras, and portable medical devices. It supports all the features of the Jetson TX1 module while enabling bigger, more complex deep neural networks. If you want to know more about the Jetson TX2, see the Nvidia Jetson pages.

jetsonTX2配置.png

JetsonTX2外觀.jpg

Introduction:

Because the Jetson TX2 is quite different from an ordinary PC running Ubuntu 16.04, I realized it was worth writing an article to help other hardware and software developers who want to learn how to install and configure the Jetson TX2. I am also enormously grateful for the help from JetsonHacks, which provides many useful tutorials and scripts on the JetsonHacks GitHub.

1. Install JetPack (Ubuntu):

When you get the Jetson TX2 it already has a stock system (Ubuntu) installed, but that system is of little use on its own. If you actually want to develop on it for deep learning or anything else, you need to flash the official JetPack image (this article is based on JetPack 3.1, which introduces L4T 28.1).

Release Highlights for JetPack 3.1 on Jetson TX2:

  • New L4T Production Release 28.1
  • TensorRT 2.1
  • cuDNN v6.0
  • Multimedia API v28.1
    • New functionality
      • TNRv2 (Temporal Noise Reduction)
        • High quality spatio-temporal noise reduction using GPU. Recommended for applications where low light video quality is important and GPU requirement is acceptable. Typical GPU utilization is <8.5% for 1080p30fps operation on Jetson TX1.
      • Piecewise linear WDR Support
        • ISP now supports cameras with “built-in WDR” that combine multiple exposures on-sensor and transmit the result with a piecewise linear encoding. Functionality verified using Sony IMX-185 (and reference driver is included). This feature does not include support for other WDR technologies such as DOL or spatial interleaving.
    • New samples
      • How to share a CUDA buffer with a V4L2 camera and then perform color conversion (YUYV to RGB) with a CUDA algorithm.
      • How to render video stream (YUV) or UI (RGB) with Tegra DRM (Direct Rendering Manager), i.e. rendering support for non-X11 and lightweight display system. Tegra DRM is implemented in user-space and is compatible with standard DRM 2.0.

Note:

Before you start installing JetPack, you need to prepare the following:

  1. A desktop or laptop computer with an Intel or AMD x86 processor.
  2. A Micro-USB cable and an Ethernet cable to connect your PC and the Jetson TX2.
  3. A LAN (Local Area Network) that both the PC and the Jetson TX2 can reach.

Installing Step:

  1. First Step: Download JetPack from NVIDIA's embedded developer site (an NVIDIA developer account is required).
  2. Second Step: Run JetPack on your PC
    • cd into the directory where you downloaded the installer.
      cd ~/Downloads
    • Make the installer executable.
      chmod +x ./JetPack-L4T-3.1-linux-x64.run
    • Run it.
      ./JetPack-L4T-3.1-linux-x64.run
  3. Third Step: Installing Interface
    • Read each screen carefully. If there is nothing to choose on a page, just click Next.
    • Do a full install, or make your own selections. For my part, I chose "no action" for OpenCV, because the bundled version is OpenCV4Tegra and some libraries require modules that are only available in upstream OpenCV versions.


      installface.png
    • Accept all licenses.
    • Select the network layout (I recommend the first option).
      002.png
    • Then select the network interface; the installer lists all of the network cards in your PC. Run ifconfig in a terminal on your PC and choose the interface that has an IP address on your LAN, like this:
      Network Device
    • In this example you would choose the wlp4s0 device. (The screenshot comes from the Internet, not from my PC; make your choice based on your own machine.)


      003.png
    • Click Next, and a terminal window will appear explaining how to put your Jetson TX2 into Force USB Recovery Mode; once you are ready, press the Enter key.
      1. Power off your device. The safest way is to shut down the Jetson TX2 and unplug the power adapter for 3 seconds.
      2. Connect your PC to the TX2 with the Micro-USB cable.
      3. Power on the device, then immediately press and hold the RECOVERY button (REC on the board); while holding it, press and release the RESET button (RES on the board). If you see the board's light flash for a moment, it has entered Recovery Mode.
        004.png
      4. You can also run lsusb in a terminal on your PC to confirm that the board is in recovery mode. (This step can be skipped.)
        005.png
    • The installation process takes some time; the Jetson TX2 will then power on and your PC will connect to it automatically over SSH.
      006.png
    • Once the installation finishes, you can remove the downloaded JetPack packages.

2. Install TensorFlow:

TensorFlow is one of the major deep learning frameworks. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX2 ships with TensorRT, which is what is called an "inference engine": the idea is that large machine learning systems train models elsewhere, and the trained models are then transferred over and run on the Jetson.

There are two ways to install TensorFlow on the Jetson TX2; this article only shows installation from a pre-built wheel file, because when I tried to build TensorFlow myself I hit a "locale en_US" error (see https://github.com/bazelbuild/bazel/issues/4483).

Build Information and TensorFlow Version:

  • L4T 28.1 (JetPack 3.1)
  • CUDA 8.0
  • cuDNN 6.0
  • TensorFlow 1.3.0
  • Built with CUDA support

Install Preparation:

  1. Download the wheel file from https://github.com/jetsonhacks/installTensorFlowJetsonTX.git.
  2. Install the pip that matches your Python installation, then install the TensorFlow wheel file from your download path (a quick verification sketch is given at the end of this list).

    If you are in China, you may want to update your apt sources first, as described in step 3 below.
    • Python 2.7
      sudo apt-get install -y python-pip python-dev
      pip install <tensorflow_wheel_file>
      
    • Python 3.5
      sudo apt-get install -y python3-pip python3-dev
      pip3 install <tensorflow_wheel_file>
      
  3. Update apt-sources:(For Chinese)
    • Using HTTPS effectively avoids cache hijacking by domestic ISPs, but the apt-transport-https package must be installed first.
    • cd into the apt sources directory.
      cd /etc/apt
      
    • Edit the sources file.
      1. sudo gedit ./sources.list
      2. Replace all the text with the following:
      # Source-code repositories are commented out by default; uncomment them if needed
      deb https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial main restricted universe multiverse
      # deb-src https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial main restricted universe multiverse
      deb https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-updates main restricted universe multiverse
      # deb-src https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-updates main restricted universe multiverse
      deb https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-backports main restricted universe multiverse
      # deb-src https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-backports main restricted universe multiverse
      deb https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-security main restricted universe multiverse
      # deb-src https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-security main restricted universe multiverse

      # Pre-release repository; not recommended to enable
      # deb https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-proposed main restricted universe multiverse
      # deb-src https://mirrors.ustc.edu.cn/ubuntu-ports/ xenial-proposed main restricted universe multiverse
      
    • Update apt-sources.
      sudo apt-get update
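
  4. (Optional) Verify the installation. This is a minimal sanity check of my own, assuming the wheel above (TensorFlow 1.3 built with CUDA support) installed cleanly; run it in a Python shell on the TX2:
      import tensorflow as tf
      print(tf.__version__)  # should report 1.3.0
      # List the local compute devices; a GPU entry confirms the CUDA build is active.
      from tensorflow.python.client import device_lib
      print([d.name for d in device_lib.list_local_devices()])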
      

3. Build OpenCV:

Background:

JetPack can install a CPU- and GPU-accelerated version of the OpenCV libraries, called OpenCV4Tegra, on the Jetson. OpenCV4Tegra is version 2.4.13 as of this writing. This is great for many applications, especially when you are writing your own apps. However, some libraries require modules that are only available in upstream OpenCV versions.

Installation:

  • Download the sources and build OpenCV:
    git clone https://github.com/jetsonhacks/buildOpenCVTX2.git
    cd buildOpenCVTX2
    ./buildOpenCV.sh
    
    Note: If you also need OpenCV for Python 3, open the buildOpenCV.sh file and
    change BUILD_opencv_python3=OFF to ON.
    You can also turn the Python 2 build OFF if you do not need it.
    
  • cd into the build directory and install the libraries:
    cd ~/opencv/build
    sudo make install
    
  • Check that the library works (run in a Python shell; see also the GStreamer check below):
    import cv2
    cv2.__version__
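
  • (Optional) Confirm GStreamer support. A minimal check of my own, assuming the build script enabled GStreamer (the camera pipelines in the Note below rely on it); run it in a Python shell:
    import cv2
    print([line for line in cv2.getBuildInformation().split("\n") if "GStreamer" in line])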
    

Note:

1. Use GStreamer and OpenCV to Capture Images from a Camera

  • Since everyone has their own tasks and goals, I simply reproduce another developer's sample code here to show how to use the camera on the Jetson TX2; adapt it to whatever you need. (A short note on running the script follows the code.)
# --------------------------------------------------------
# Camera sample code for Tegra X2/X1
#
# This program could capture and display video from
# IP CAM, USB webcam, or the Tegra onboard camera.
# Refer to the following blog post for how to set up
# and run the code:
#   https://jkjung-avt.github.io/tx2-camera-with-python/
#
# Written by JK Jung <jkjung13@gmail.com>
# --------------------------------------------------------

import sys
import argparse
import cv2
import numpy as np

windowName = "CameraDemo"

def parse_args():
    """
    Parse input arguments
    """
    parser = argparse.ArgumentParser(description=
                                     "Capture and display live camera video on Jetson TX2/TX1")
    parser.add_argument("--rtsp", dest="use_rtsp",
                        help="use IP CAM (remember to also set --uri)",
                        action="store_true")
    parser.add_argument("--uri", dest="rtsp_uri",
                        help="RTSP URI string, e.g. rtsp://192.168.1.64:554",
                        default=None, type=str)
    parser.add_argument("--latency", dest="rtsp_latency",
                        help="latency in ms for RTSP [200]",
                        default=200, type=int)
    parser.add_argument("--usb", dest="use_usb",
                        help="use USB webcam (remember to also set --vid)",
                        action="store_true")
    parser.add_argument("--vid", dest="video_dev",
                        help="video device # of USB webcam (/dev/video?) [1]",
                        default=1, type=int)
    parser.add_argument("--width", dest="image_width",
                        help="image width [1920]",
                        default=1920, type=int)
    parser.add_argument("--height", dest="image_height",
                        help="image width [1080]",
                        default=1080, type=int)
    args = parser.parse_args()
    return args

def open_cam_rtsp(uri, width, height, latency):
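    # Pipeline: rtph264depay/h264parse extract the H.264 stream, omxh264dec decodes it
    # in hardware, nvvidconv converts the decoded NVMM buffer to BGRx, and
    # videoconvert/appsink hand the frames to OpenCV.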
    gst_str = ("rtspsrc location={} latency={} ! rtph264depay ! h264parse ! omxh264dec ! "
               "nvvidconv ! video/x-raw, width=(int){}, height=(int){}, format=(string)BGRx ! "
               "videoconvert ! appsink").format(uri, latency, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

def open_cam_usb(dev, width, height):
    # We want to set width and height here, otherwise we could just do:
    #     return cv2.VideoCapture(dev)
    gst_str = ("v4l2src device=/dev/video{} ! "
               "video/x-raw, width=(int){}, height=(int){}, format=(string)RGB ! "
               "videoconvert ! appsink").format(dev, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

def open_cam_onboard(width, height):
    # On versions of L4T previous to L4T 28.1, flip-method=2
    # Use Jetson onboard camera
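    # Pipeline: nvcamerasrc captures from the onboard CSI camera at its native
    # resolution, nvvidconv scales and converts the NVMM I420 buffer to BGRx, and
    # videoconvert/appsink hand the frames to OpenCV at the requested size.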
    gst_str = ("nvcamerasrc ! "
               "video/x-raw(memory:NVMM), width=(int)2592, height=(int)1458, format=(string)I420, framerate=(fraction)30/1 ! "
               "nvvidconv ! video/x-raw, width=(int){}, height=(int){}, format=(string)BGRx ! "
               "videoconvert ! appsink").format(width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

def open_window(windowName, width, height):
    cv2.namedWindow(windowName, cv2.WINDOW_NORMAL)
    cv2.resizeWindow(windowName, width, height)
    cv2.moveWindow(windowName, 0, 0)
    cv2.setWindowTitle(windowName, "Camera Demo for Jetson TX2/TX1")

def read_cam(windowName, cap):
    showHelp = True
    showFullScreen = False
    helpText = "'Esc' to Quit, 'H' to Toggle Help, 'F' to Toggle Fullscreen"
    font = cv2.FONT_HERSHEY_PLAIN
    while True:
        if cv2.getWindowProperty(windowName, 0) < 0: # Check to see if the user closed the window
            # This will fail if the user closed the window; Nasties get printed to the console
            break
        ret_val, displayBuf = cap.read()
        if showHelp == True:
            cv2.putText(displayBuf, helpText, (11,20), font, 1.0, (32,32,32), 4, cv2.LINE_AA)
            cv2.putText(displayBuf, helpText, (10,20), font, 1.0, (240,240,240), 1, cv2.LINE_AA)
        cv2.imshow(windowName, displayBuf)
        key = cv2.waitKey(10)
        if key == 27: # ESC key: quit program
            break
        elif key == ord('H') or key == ord('h'): # toggle help message
            showHelp = not showHelp
        elif key == ord('F') or key == ord('f'): # toggle fullscreen
            showFullScreen = not showFullScreen
            if showFullScreen == True: 
                cv2.setWindowProperty(windowName, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
            else:
                cv2.setWindowProperty(windowName, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_NORMAL) 

if __name__ == "__main__":
    args = parse_args()
    print("Called with args:")
    print(args)
    print("OpenCV version: {}".format(cv2.__version__))

    if args.use_rtsp:
        cap = open_cam_rtsp(args.rtsp_uri, args.image_width, args.image_height, args.rtsp_latency)
    elif args.use_usb:
        cap = open_cam_usb(args.video_dev, args.image_width, args.image_height)
    else: # by default, use the Jetson onboard camera
        cap = open_cam_onboard(args.image_width, args.image_height)

    if not cap.isOpened():
        sys.exit("Failed to open camera!")

    open_window(windowName, args.image_width, args.image_height)
    read_cam(windowName, cap)
    
    cap.release()
    cv2.destroyAllWindows()
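
  • A note on running the script: save the code to a file (the name is up to you; tegra_cam.py is just an example, not a name from the original source) and launch it with python. With no flags it opens the onboard camera; pass --usb --vid N to use a USB webcam at /dev/videoN, or --rtsp --uri <rtsp_uri> for an IP camera. The --width and --height flags set the capture and display resolution.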

2. Enabling Maximum Performance on the TX2

  • The Jetson TX2 contains a CPU complex and a GPU. The CPU complex is made up of a dual-core NVIDIA Denver 2 cluster and a quad-core ARM Cortex-A57 cluster, connected by a high-performance interconnect.
  • The image below lists all the power modes of the Jetson TX2 and their details.
mode of TX2
  • Check the current power mode:
    sudo nvpmodel -q --verbose
    
  • Enable the maximum power mode (MAXN):
    sudo nvpmodel -m 0
    
  • Show CPU and GPU usage information:
    sudo tegrastats
    # If some CPU cores are shown as offline, you can bring them online like this:
    sudo su
    echo 1 > /sys/devices/system/cpu/cpu1/online
    
    # Running ~/jetson_clocks.sh pushes the clocks to their maximum frequencies
    ~/jetson_clocks.sh
    

3. Host PC "Failed to Fetch" Errors (sudo apt-get update failed)

The JetPack installer can add the arm64 foreign architecture (and some arm64 packages) to the host PC's apt setup; repositories that do not carry arm64 builds then cause "Failed to Fetch" errors. Removing them usually resolves the problem:

sudo apt-get remove .*:arm64
sudo dpkg --remove-architecture arm64
#Then you can update your apt.
sudo apt-get update