NetApp ONTAP Simulator is freely available, and allows anyone to test out NetApp's storage platform without having to own a physical array. In the past I've used the NetApp Edge VSA, but it has not been available for some months now, so the simulator is the only way to go. In this article, I'll show you how to install and configure the Simulator in its latest version, 8.3 RC1, and connect it to a vSphere cluster.
Deployment
First, you need to download the ONTAP Simulator. This part is pretty easy: go to this web page, log in or register on the NetApp Support site, and download the Simulator itself. You need the ESX version, plus the license codes.
Once you've downloaded the compressed archive, it's time to deploy it: expand the archive, then upload it to an ESXi datastore using the datastore browser.
By default, the simulator comes with two virtual shelves of 14 x 1GB disks, so 28 disks in total. With 3 disks taken for the dedicated Clustered ONTAP root aggregate, this leaves 25 disks for a data aggregate, of which two will be parity disks, giving a maximum usable space of around 23GB. It's almost impossible to run any serious VM on it, but by editing the configuration and replacing the disks with four shelves of 14 x 9GB disks (as we will do below), it's possible to get close to 400GB of usable space. Some commands you may find on the Internet relate to previous versions and have changed between 8.2 and 8.3; I will show you here the updated commands for 8.3.
Before the first boot, you will need to change the 4th virtual disk of the simulator. It is created as "sparse", which is not supported on ESXi. I've read about the workaround of loading the multiextent module in the kernel (via the command vmkload_mod multiextent), but honestly it's a really ugly solution: it's unsupported by VMware, and it does not survive reboots unless you store the command in a start-up script. It's better to convert the disk to a proper thin format right away. Also, a sparse disk cannot be extended in ESXi, and we will need to expand it at some point.
So, first remove the disk from the powered off virtual machine without deleting it:
In the command line, go into the folder where the disk is located, and run these commands:
vmkfstools -i DataONTAP-sim.vmdk thin.vmdk -d thin (clone the disk into a thin disk)
vmkfstools -U DataONTAP-sim.vmdk (deletes the old sparse disk)
vmkfstools -E thin.vmdk DataONTAP-sim.vmdk (renames the cloned disk to the original name)
After these operations, reconnect the new thin disk to the virtual machine on the same IDE channel. Also, before powering on the virtual machine, edit the original 4 network cards and configure them for your network. Power it on, and run these commands:
1. Press Ctrl-C for Boot Menu when prompted
2. Enter selection 4 'Clean configuration and initialize all disks' and answer 'y' to the two prompts. Wait for the procedure to complete; it will reboot the VSA automatically
3. At the setup screen, type exit to go to the prompt, and log in as admin with no password
4. set the password for the admin user with: security login password
5. security login unlock -username diag
6. security login password -username diag (enter the new password twice)
7. set -privilege diagnostic (and press y; the old command was "advanced", but systemshell is no longer under advanced in 8.3)
8. systemshell local (and log in with the diag user that was unlocked in step 5)
9. setenv PATH “${PATH}:/usr/sbin”
10. echo $PATH
11. cd /sim/dev/,disks
12. ls (see all the disks listed)
13. sudo rm v0*
14. sudo rm v1*
15. sudo rm ,reservations
16. cd /sim/dev
17. vsim_makedisks -h (we will use disk type 36)
18. sudo vsim_makedisks -n 14 -t 36 -a 0
19. sudo vsim_makedisks -n 14 -t 36 -a 1
20. sudo vsim_makedisks -n 14 -t 36 -a 2
21. sudo vsim_makedisks -n 14 -t 36 -a 3
22. ls ,disks/ (we now have 4 shelves with 14 disks each, all at 9 GB in size)
23. exit
24. system node halt local
At this point, to accommodate the new disks, we need to expand the containing vmdk disk. An additional problem: the disk is attached via IDE, so it cannot be expanded directly. Honestly, between this issue and the original sparse format, I'm not sure why this is listed as the "ESX" version of the simulator, but anyway… after powering down the appliance:
1. remove the disk from the VM again, without deleting it
2. edit the vmdk descriptor file from the command line and change ddb.adapterType from "ide" to "lsilogic" (see the sketch right after this list)
3. add the disk back to the VM; it will now be listed as SCSI and can be expanded to 550GB (to accommodate the additional shelves and disks we created before)
4. remove the vmdk from the VM
5. edit the vmdk descriptor again and change ddb.adapterType back from "lsilogic" to "ide"
6. add the IDE vmdk back to the VM for the final time
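For reference, here is a minimal sketch of the descriptor edit of steps 2 and 5, assuming the descriptor kept the DataONTAP-sim.vmdk name used earlier (it is the small text file sitting next to the -flat extent, and you can edit it with vi directly from the ESXi shell):
grep adapterType DataONTAP-sim.vmdk (shows the current value, ddb.adapterType = "ide")
vi DataONTAP-sim.vmdk (change the value to "lsilogic" for step 2, and back to "ide" for step 5, then save and quit)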
Now, with the disk finally expanded, it's time for another batch of commands:
1. power on the SIM
2. invoke the Boot Menu with Ctrl-C when offered the option
3. select option 5 (Maintenance mode boot)
4. disk show (check that disks 0.16, 0.17 and 0.18 are assigned to the system aggregate; all the other 53 disks will be assigned later)
5. halt
6. power cycle the simulator
7. press Ctrl-C for the Boot Menu when prompted
8. select option 4 and wait for the process to finish again; this time, with many more and bigger disks, it will take quite a bit longer
9. configure the node management network as proposed
10. log in with the admin user and run cluster setup
11. my choices are to create a new cluster and make it a single-node cluster for simplicity
12. the base license is in the text file downloaded together with the simulator (see the note right after this list if you want to add more licenses later)
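A note on licenses: if you skip them during cluster setup, or want to add the feature licenses (iSCSI, NFS, and so on) afterwards, they can be added at any time from the cluster shell; a minimal sketch, using the codes from the licenses text file (the placeholder below is obviously not a real code):
system license add -license-code XXXXXXXXXXXXXXXXXXXXXXXXXXXX (repeat for each license code in the file)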
Once the cluster node is up and running, we still need to assign all the created disks to the node and configure a second aggregate to hold your data. In the command line of the simulator:
1. system show (to retrieve the node name, in my case dataontap-01)
2. storage disk assign -all true -node dataontap-01
3. system node run -node dataontap-01 options disk.maint_center.spares_check off
4. storage aggregate create -aggregate dataontap01_01 -diskcount 53 -nodes dataontap-01 -maxraidsize 28
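Before moving to the graphical interface, a quick check from the same CLI session should confirm the result; a minimal sketch (the output layout may differ slightly between versions):
storage aggregate show (both aggregates should be online, with dataontap01_01 owning 53 disks)
storage disk show -container-type spare (should now report no, or very few, spare disks)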
Configuration
The Simulator has an integrated web interface for management, the OnCommand System Manager. You can reach it by connecting over https to the IP assigned to the cluster. First, we verify the new aggregate has all the 53 new disks:
Then, it's time to create an SVM. An SVM (Storage Virtual Machine) is an intermediate object between the clients and the underlying cluster. Without going into further details, you can read this great post by Cormac Hogan to learn more about Clustered Data ONTAP and SVMs. For us, what's important to know is that we need to configure the Simulator by creating and configuring at least one SVM. Everything here happens in Clustered Mode.
Before configuring the storage resources, you have to configure at least one subnet, since it will be requested in other wizards. Go into Cluster, select the cluster, open Configuration – Network and create a subnet representing the storage network you are using:
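If you prefer the cluster shell, the same subnet can also be created from the command line; a minimal sketch with illustrative values (the subnet name, broadcast domain and IP range are assumptions, adjust them to your storage network):
network subnet create -subnet-name Storage -broadcast-domain Default -subnet 10.2.70.0/24 -ip-ranges 10.2.70.146-10.2.70.147 (the two IPs will later be used by the SVM)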
In the subnet creation form you define the subnet the storage will be connected to, and at least two IP addresses that will then be used by the SVM. Once you have the subnet, open the "Storage Virtual Machines" section of the System Manager. Here you see the cluster with no SVM yet. You need to hit "Create" to start the configuration process:
You give here a name to the new SVM, the protocols you want it to support, and you select dataontap01_01 as the root aggregate to store the SVM volumes in. This aggregate is the one we created before. In the following step, you create a target alias for iSCSI (if you enabled it, like me), and select the subnet created before. You can also immediately create a new LUN for your vSphere cluster if you want. I'm going to do it in a second step, as I want to better configure the host initiators that will be authorized to access the LUNs:
Finally, you configure the username and password of the SVM (remember, SVM is all about multi-tenancy, so in a production environment you can give access only to a specified SVM instead of the entire cluster), and assign an additional interface and IP for management. One IP of the subnet has been assigned to the data interface (the one that will be pointed to from vSphere to connect via iSCSI), the other for the dedicated management interface of the SVM. The SVM itself is now ready to be used:
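For completeness, the core of what the wizard does can also be done from the cluster shell; a minimal sketch, assuming an illustrative SVM name of svm_iscsi (the wizard does more, like creating the data and management LIFs from the subnet and setting the SVM administrator):
vserver create -vserver svm_iscsi -rootvolume svm_iscsi_root -aggregate dataontap01_01 -rootvolume-security-style unix (create the SVM with its root volume on the data aggregate)
vserver iscsi create -vserver svm_iscsi (enable the iSCSI protocol on the SVM)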
Create your iSCSI datastore
Once the SVM is up and running, it's time to export an iSCSI volume to be used as a new datastore by vSphere. In the SVM, go under Storage -> LUNs, select the tab Initiator Groups and create a new one. It's good to group all the ESXi hosts of a cluster into one single group, so whenever there's a new LUN, it's easy to simply authorize the entire group instead of manually adding each single host. Also, if you add or remove an ESXi host from the group, permissions are automatically updated:
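The same initiator group can also be created from the CLI; a minimal sketch, reusing the illustrative svm_iscsi name from the sketch above and a hypothetical group name esx_cluster (replace the IQNs with the real ones of your ESXi hosts):
lun igroup create -vserver svm_iscsi -igroup esx_cluster -protocol iscsi -ostype vmware -initiator iqn.1998-01.com.vmware:esx01 (create the group with the first host)
lun igroup add -vserver svm_iscsi -igroup esx_cluster -initiator iqn.1998-01.com.vmware:esx02 (add the remaining hosts one by one)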
Now, go into the tab “LUN Management” and click Create. The LUN wizard is started. The first activity is to assign name, size and type to the LUN:
In the next steps, I chose to create a new flexible volume to hold the LUN, based on the aggregate I created at the beginning:
Finally, I assign permissions to access this LUN to the Initiator Group I created before:
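As before, here is the CLI equivalent for reference; a minimal sketch with illustrative volume and LUN names, sized for the 100GB LUN used later in this post:
volume create -vserver svm_iscsi -volume vsphere_vol -aggregate dataontap01_01 -size 110GB (flexible volume to hold the LUN, with a bit of headroom)
lun create -vserver svm_iscsi -path /vol/vsphere_vol/vsphere_lun -size 100GB -ostype vmware (the LUN itself)
lun map -vserver svm_iscsi -path /vol/vsphere_vol/vsphere_lun -igroup esx_cluster (authorize the initiator group created before)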
I skip the quality of service part and complete the wizard. The LUN is ready to be used by my vSphere cluster. To confirm which IP address you need to connect to, you can quickly check in Configuration -> Protocols -> iSCSI:
10.2.70.147 is the IP address that you need to use as the target in ESXi configuration. Once you’ve added this new IP address into the iSCSI targets of the ESXi hosts and run a rescan of the storage, you can see the new 100GB LUN:
With the usual datastore creation wizard you can now select the LUN and format it as a new VMFS volume. After configuring the new IP in the target section of each ESXi host, a quick rescan at the cluster level lets every host see the new datastore.
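If you prefer to script the ESXi side as well, the dynamic target and the rescan can be done from the ESXi shell; a minimal sketch, assuming the software iSCSI adapter is vmhba33 on your hosts (check the real name with esxcli iscsi adapter list):
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.2.70.147:3260 (add the SVM data LIF as a dynamic discovery target)
esxcli storage core adapter rescan --all (rescan all adapters to detect the new LUN)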
Hi….
i installed ontap 8.3 on my desktop(on vmware)…
can i mount that simulator on my laptop (redhat 6.0)…
because i want to execute my testing scripts on it….
if there is another way please let me know….
i am mainly concentrating on Storage Automation testing…..
thanku ….
This simulator version is designed for ESXi server as explained in the post. If you need to deploy it into VMware workstation you need to download the correct version.
Hello Sir…..
i will download simulator related to VMware workstation…..
how to execute my perl test scripts on that simulator….
but if i install the simulator, i can only execute the ontap commands…..
there are no perl modules on that simulator ( i tried in 7.3 ontap simulator )…
how do i overcome that…
thanku sir…
This is something you should ask the NetApp people…
ok thanku sir..
ok thanku sir…..
Hi! I followed your instructions and it worked perfectly fine until I started using it as datastores in my vSphere infrastructure. After some time the whole appliance went through a kernel panic and every time I was trying to reboot it it showed the same error message : “AIO I/O failed because of: No space left on device”. When I went to the maintenance mode and checked aggregate status I encountered the following error: “Broken Disk […] detected prior to assimilation.”. By the time it went down I barely managed to place 2-3 VMs with a total amount of 150GBs of disk space (via iSCSI). My question is: after you set up the appliance, did you actually try using it as a de facto datastore? Did you have to assign more space to the initial aggregate destined for the appliance?
I’ve never had any kind of issues in using this simulator, and yes I’ve used it 🙂
You'd better check the error on the NetApp community. I've used a new dedicated aggregate, as I've shown in the guide, not the default one (that one has indeed the operating system in it, so filling it could maybe lead to problems..)
Thanks a lot for your confirmation. I will set it up again from scratch. Just for confirmation: is the size of DataONTAP-sim.vmdk, after making the necessary modifications in IDE mode, 550GB? In your post you wrote "550MB" and I assumed it's just a typo. Otherwise, did you increase the size of the system aggregate before creating a new one for the cluster? Thanks again for your great post!
Oh, you are right it’s a typo, it’s GB. Now it’s corrected, thanks again.
Full error message: [ONTAP-MAIN-01:disk.ioMediumError:warning]: Medium error on disk v.2.19: op 0x2a:0081d998:01d0 sector 0 SCSI:medium error – – If the disk is in a RAID group, the subsystem will attempt to reconstruct unreadable data (3 14 1 0) (28) [NETAPP VD-9000MB-FZ-520 0042] S/N [11462003] AIO I/O failed because of : No space left on device. The error message goes through all disks (S/N 11562201 etc.).
By the way, "vmkfstools -X 550g [netappsimname]_1.vmdk" inflates the disk without changing to lsilogic (ESX 5.1U2 here); no need to delete and re-add the disk as SCSI, which messes up the vmx (inserts useless entries).
Thanks for the addition Mario, appreciated.
Can you create HA pair in ontap 8.3 simulator to test failover ??
Yes, the simulator is a fully functional ONTAP system, so with 8.3 you can also use all the cDOT features like clustering. Deploy two simulators and group them in the same cluster.
Thanks a lot for sharing this information. I was wondering if it is possible to do an upgrade using the simulator through System Manager from ONTAP 8.3.1 to ONTAP 8.3.2, which is now available.
Hi Jorge,
I’ve not tried to upgrade the Simulator yet. If someone wants to try and let me know the result here in the comments, that would be great!
I have not done an upgrade but these instructions worked for a clean install of 8.3.2
Great post on how to install and optimize the NetApp simulator, thanks. I've posted an update to cover the ONTAP 9 simulator here:
http://www.flackbox.com/netapp-simulator/