Apache Spark has established itself as the primary "programming language" of big data. Its multifaceted architecture elegantly addresses challenges that were traditionally hard to solve when working with large amounts of data. Despite all these benefits, Apache Spark has a high barrier to entry for professionals without a coding or big data background.
That reminds me of the time when personal computers first hit the market. Users had to be proficient in programming in order to use a computer effectively. Over time, that bar was lowered so far that programming knowledge is no longer required at all.
Apache Spark is going through a similar shift. Using a tool like DataRow.io, you can build a complex data pipeline with just a few clicks, even if you don't have any Apache Spark knowledge. That enables teams to produce results faster, as they are no longer blocked waiting on data engineers who are proficient in Apache Spark.