Pro or not, if your files are important to you, then you should follow the standard 3-2-1 rule of storage: keep 3 copies of your data, on at least 2 different storage systems, with 1 copy offsite. I'm a developer as well as a commercial photographer and an IT director for multiple businesses, so I move a *lot* of data on a daily basis. My main box has around 20TB of RAID 50+2 SAS storage holding all my active data. That is backed up to an on-site NAS with around 100TB over 10GBASE-T, which is then compressed for off-site "cold" storage over the internet.
The reasoning behind the 3-2-1 design is that you only ever touch one copy, the fastest one, usually on your box. The second copy is kept on-site for fast backup and restore. If you accidentally delete a whole folder, it's quicker and easier to pull it from the on-site copy right back into the live data. The third is offsite for safety, but it's usually over a slow medium, like the internet, so it happens once a night. The third copy is compressed and pulled directly from the second copy, never the first.
The second copy can be simply an hourly differential mirror of the live data, a daily backup, or a periodic copy of all changed files from the primary. The third is additive only: it will even have copies of files you've deleted locally. There are times when you'll delete something and realize days later that you needed it, or didn't mean/notice/expect the deletion.
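To make the "copy changed files, never delete" idea concrete, here's a minimal Python sketch of that additive mirror. Real tools (rsync, robocopy, your backup software) do this far more robustly; the function name and the mtime-comparison approach here are just illustrative assumptions, not how my actual pipeline is written.

```python
import os
import shutil

def mirror_changed(src: str, dst: str) -> list[str]:
    """Copy files from src into dst when missing or newer; never delete from dst."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # Copy only if the destination copy is missing or older than the source.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves timestamps
                copied.append(d)
    return copied
```

Because nothing is ever removed from `dst`, a file deleted from the live data (accidentally or not) still sits in the backup until you prune it on purpose.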
I have some 500K+ "finished" PSD files, ranging in size from 50MB to 900GB (and a few that are 1-2TB). Some of these are personal projects; most of it is client imaging. I keep these so that a client can call upon me to retrieve a past file for a current campaign. These, to me, are "free money", as they've already been edited, approved, and are known to the client. For my personal work, I have everything from film scans back in my early days shooting MF/LF all the way to current, massive digital files.
My offsite uses the cold storage of AWS S3 Glacier Deep Archive. It's cheap since it's rarely accessed: inbound transfer is free, and storage costs about $0.00099 per GB per month. I've written my own code that does the work to get it there, but in reality, any offsite backup that matches your budget and needs will be fine. Some services offer unlimited storage for a fixed cost, like Carbonite, and that's the way to go. Remember that the catch with offsite is retrieval: if you lost a file now and needed it immediately, it may take a while to download it back. In the case of Glacier Deep Archive, it can be 12 hours before the file is even ready for retrieval. For consumer backups, it can take an hour or so to index, pull, and download all your stuff. But it's better than losing all your work!
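To put that rate in perspective, the math is trivial. This little sketch uses the per-GB-month figure quoted above; actual pricing varies by region, and retrieval and request fees are extra, so treat it as a ballpark only.

```python
# Rate quoted above for S3 Glacier Deep Archive; varies by region,
# and excludes retrieval/request fees.
DEEP_ARCHIVE_PER_GB_MONTH = 0.00099  # USD

def monthly_cost_usd(gigabytes: float) -> float:
    """Approximate monthly storage cost at the Deep Archive rate."""
    return gigabytes * DEEP_ARCHIVE_PER_GB_MONTH

# e.g. 16 TB (~16,000 GB) of cold storage:
print(round(monthly_cost_usd(16_000), 2))  # 15.84
```

Roughly $16/month to keep 16TB of finished work safe is hard to argue with, as long as you can live with the slow restore.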
Size-wise, expect each subsequent store to be at least 2x the size of the closer, faster one. If you have 4TB of local storage, your second should be at least 8TB (2 full copies' worth of backup versions), and the cold store should be at least 16TB (2 full copies of the intermediate backup). These are storage capacities, not data sizes. Just because you have 1TB of data on a 4TB drive doesn't mean that you won't eventually grow that data. You don't want to get to a point where your backup quits working (sometimes without being noticed) because it ran out of space.
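The 2x rule above is simple enough to sketch as a one-liner, but it's handy when planning purchases. The function name and the default doubling factor are just my illustration of the rule:

```python
def tier_capacities(local_tb: float, factor: float = 2.0) -> tuple[float, float]:
    """Suggested minimum on-site and cold-store capacities, per the 2x rule."""
    onsite = local_tb * factor   # second copy: 2x the live storage
    cold = onsite * factor       # cold store: 2x the second copy
    return onsite, cold

print(tier_capacities(4))  # (8.0, 16.0)
```

Note that the input is the *capacity* of your local storage, not how much data currently sits on it, for exactly the growth reason above.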
Never, never, ever trust DVD/BR/CD media. Never. It'll corrupt or lose your data one day when you least expect it. I did a test a few years back with some friends where we all burned known data files (specific data patterns) onto different media types and brands. We made multiple copies on a Plextor at 8x for the best burn, validated and tested each burn, then compared the burns to the source files to ensure "perfect" data matches. We then placed some in safes, others in cases on a shelf, and even a set in a disc case in the trunk of a car during autumn (a data tracker showed the hottest it ever got in there was 93F). We tested the discs after letting them sit for 6 months. They all had errors. All of them. One of the discs that was in the trunk even reported as "blank media" and allowed me to re-burn the disc! If those errors had existed on a real data set, some discs would've silently corrupted an image while others were fine, and you wouldn't know until you tried to access that one specific file.
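The "compare the burns to the source files" step we did is easy to automate with checksums, and it's worth doing for *any* backup tier, not just optical media. A minimal sketch (streaming SHA-256 so 900GB PSDs don't need to fit in RAM; the function names are mine, not from any particular tool):

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash a file in 1MB chunks so huge files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_copy(source: str, backup: str) -> bool:
    """True only if both files have byte-identical contents."""
    return sha256_of(source) == sha256_of(backup)
```

Store the hashes alongside the backup and re-verify periodically; that's how you catch silent corruption before the day you actually need the file.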
The foils on optical media are heated by a laser and altered to produce the pits and lands. Heat, gravity, and other factors can degrade these over time and make them difficult for drives to read back. Discs are great for transferring files, temporary storage, or as an additional backup, but should never be relied upon as the primary backup!
Also, be careful using anything like USB flash or SSDs for long-term storage. Flash media can die unexpectedly and is much harder to recover data from. I had a client that backed up piles of imaging, as well as their QuickBooks backups, to a 256GB USB flash drive which they kept in their safe. One day, their desktop got hit by lightning and they went to retrieve data from the USB. The process of downloading all the files got the drive hot, and it failed about halfway through. SSDs are the same way. They're great for running your box or acting as a workspace/scratch disk for your current project, but if anything happens to the drive, your chances of recovering any data are much lower (and more costly) than with old spinning rust.