Re: T2 vs F1
On T2 you can: build and run for HW emulation; build for HW
On F1 you can: build and run for HW emulation; build and run for HW
So you can build for HW on T2, create the AFI on T2, and then move to an F1 instance to run your project on HW.
Re: This particular example
I double-checked this example. It actually cannot run on F1 because it relies on a shell feature not available on F1. It should not have been included in the aws-fpga repo, and we will look to remove it.
What particular aspect of this example was of interest to you? We may have other examples that demonstrate what you are interested in.
Hi Frankie,
This example is supported today on F1, and is part of the Vitis examples included in the aws-fpga repo:
https://github.com/Xilinx/Vitis_Accel_Examples/tree/bb80c8ec699c3131e8874735bd99475ac6fe2ec7/rtl_kernels/rtl_streaming_free_running
All the Vitis Github examples follow the same build instructions listed here:
https://xilinx.github.io/Vitis_Accel_Examples/master/html/compile_execute.html
The Makefile uses the DEVICE variable to specify the target platform. When targeting F1, you simply need to set DEVICE=$AWS_PLATFORM.
The following commands will build and run the example for HW emulation targeting F1.
cd aws-fpga/Vitis/examples/xilinx/rtl_streaming_free_running
make clean
make check TARGET=hw_emu DEVICE=$AWS_PLATFORM all
To build for HW on F1:
make TARGET=hw DEVICE=$AWS_PLATFORM all
Then create the AFI as usual before running.
Oh wait, you cannot do this on a T2 then? It has to be done directly on F1? UPDATE: I tried on F1 and got the same result shown below (I did source vitis_setup.sh on both, and both were launched from the Amazon FPGA Developer AMI):
When running on T2:
[centos@ip-172-31-11-211 rtl_streaming_free_running]$ make clean
utils.mk:65: [WARNING]: g++ version older. Using g++ provided by the tool : /opt/Xilinx/Vivado/2019.2/tps/lnx64/gcc-6.2.0/bin/g++
basename: missing operand
Try 'basename --help' for more information.
rm -rf vadd_stream /{*sw_emu*,*hw_emu*}
rm -rf profile_* TempConfig system_estimate.xtxt *.rpt *.csv
rm -rf src/*.ll *v++* .Xil emconfig.json dltmp* xmltmp* *.log *.jou *.wcfg *.wdb
[centos@ip-172-31-11-211 rtl_streaming_free_running]$ make check TARGET=hw_emu DEVICE=$AWS_PLATFORM all
utils.mk:65: [WARNING]: g++ version older. Using g++ provided by the tool : /opt/Xilinx/Vivado/2019.2/tps/lnx64/gcc-6.2.0/bin/g++
Makefile:118: *** This example is not supported for /home/centos/src/project_data/aws-fpga/Vitis/aws_platform/xilinx_aws-vu9p-f1_shell-v04261818_201920_2/xilinx_aws-vu9p-f1_shell-v04261818_201920_2.xpfm. Stop.
When running on F1:
[centos@ip-172-31-28-216 aws-fpga]$ cd Vitis/examples/xilinx/rtl_kernels/rtl_streaming_free_running/
[centos@ip-172-31-28-216 rtl_streaming_free_running]$ make clean
utils.mk:65: [WARNING]: g++ version older. Using g++ provided by the tool : /opt/Xilinx/Vivado/2019.2/tps/lnx64/gcc-6.2.0/bin/g++
basename: missing operand
Try 'basename --help' for more information.
rm -rf vadd_stream /{*sw_emu*,*hw_emu*}
rm -rf profile_* TempConfig system_estimate.xtxt *.rpt *.csv
rm -rf src/*.ll *v++* .Xil emconfig.json dltmp* xmltmp* *.log *.jou *.wcfg *.wdb
[centos@ip-172-31-28-216 rtl_streaming_free_running]$ make TARGET=hw DEVICE=$AWS_PLATFORM all
utils.mk:65: [WARNING]: g++ version older. Using g++ provided by the tool : /opt/Xilinx/Vivado/2019.2/tps/lnx64/gcc-6.2.0/bin/g++
Makefile:118: *** This example is not supported for /home/centos/aws-fpga/Vitis/aws_platform/xilinx_aws-vu9p-f1_shell-v04261818_201920_2/xilinx_aws-vu9p-f1_shell-v04261818_201920_2.xpfm. Stop.
Thank you for the quick response, Thomas!!
Ah ok I see!
I'm currently working on adapting the HDK vadd_vhdl example, but I was hoping to achieve the same thing in Vitis too and pick whichever one is better suited. Basically, I am looking into synthesizing my own Verilog code and sending packets over the network to the FPGA for parsing from user-space C code
(mentioned here: https://forums.aws.amazon.com/thread.jspa?messageID=948816).
The streaming example seemed valuable to me because I would like the FPGA to take a stream of these packets at potentially irregular timing and output varying-size results after they have been parsed. I want things to be bit-oriented, and so far my HDK adaptation is likely going to use peek and poke, which is byte-oriented. That could work if I can leverage AXI functionality like WVALID and AWVALID, but I'm not sure the full AXI options are available, as I've seen mostly AXI4-Lite used. This HDK path seems promising, but if there's an easier or more efficient way with Vitis, I don't want to rule that out!
On F1, the Vitis shell only allows transferring data between the host application and the kernels via global memory. In other words, you cannot directly stream data from the host to your kernel. The host has to move data to global memory, and then your kernel will need to grab it from there.
Kernels can, however, perform streaming transfers with other kernels. The rtl_streaming_k2k_mm example shows two kernels sharing a streaming connection:
https://github.com/Xilinx/Vitis_Accel_Examples/tree/bb80c8ec699c3131e8874735bd99475ac6fe2ec7/rtl_kernels/rtl_streaming_k2k_mm
This example will run on F1.
So, if the network data will presumably have to go through the host, then transferring to global memory is necessary?
Correct.
I've been pointed towards Virtual Ethernet, but that seems like a complicated thing to include for my project and I would rather keep things simple if possible. Is there an example for the fastest or proper way to push packets at irregular timing from the host to the FPGA and back?
I am not aware of such an example. But the mechanism for sending data from the host to the device is pretty straightforward. When your packet is ready, you would call an API like clEnqueueMigrateMemObjects or clEnqueueWriteBuffer to copy it to the device, and then use clEnqueueTask to start your kernel.
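A minimal sketch of that per-packet sequence, using the standard OpenCL 1.2 host API. This assumes the usual boilerplate (platform, context, command queue, program, kernel, and buffer creation) has already been done, and that the device buffer was already set as a kernel argument with clSetKernelArg; error checking is omitted for brevity, and the function name here is hypothetical:

```c
#include <CL/cl.h>
#include <stddef.h>

/* Send one packet to the device and run the kernel on it.
 * Assumes dev_buf is a cl_mem created with clCreateBuffer and
 * already bound to the kernel via clSetKernelArg during setup. */
void send_packet_and_run(cl_command_queue queue, cl_kernel kernel,
                         cl_mem dev_buf, const void *packet, size_t len)
{
    /* Copy the packet from host memory into the device buffer.
     * (clEnqueueMigrateMemObjects with a host-resident buffer is
     * the alternative mentioned above.) */
    clEnqueueWriteBuffer(queue, dev_buf, CL_FALSE /* non-blocking */,
                         0, len, packet, 0, NULL, NULL);

    /* Start the kernel; it reads the packet from global memory.
     * The queue guarantees the write is enqueued before the task. */
    clEnqueueTask(queue, kernel, 0, NULL, NULL);

    /* Wait for completion before reusing the buffer for the next
     * irregularly timed packet (a double-buffering scheme would
     * avoid this stall). */
    clFinish(queue);
}
```

Results would come back the same way in reverse: the kernel writes them to global memory, and the host reads them with clEnqueueReadBuffer.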