Synapse Spark
Preview: Azure Synapse Runtime for Apache Spark 3.5
We’re thrilled to announce that the Azure Synapse Runtime for Apache Spark 3.5 is now available in preview for our Azure Synapse Spark customers as they prepare to migrate to Microsoft Fabric Spark.

What Does This Mean for You? You can now create Apache Spark pools that use the Azure Synapse Runtime for Apache Spark 3.5. The essential changes come from upgrading Apache Spark to version 3.5 and Delta Lake to version 3.2. Please review the official release notes for Apache Spark 3.5 for the complete list of fixes and features, and review the migration guidelines between Spark 3.4 and 3.5 to assess potential changes to your applications, jobs, and notebooks. For additional details, check the Azure Synapse Runtime for Apache Spark 3.5 documentation.

What is next? We offer the Azure Synapse Runtime for Apache Spark 3.5 to our Azure Synapse Spark customers; however, we strongly recommend that customers plan to migrate to Microsoft Fabric Spark to benefit from the latest innovations and optimizations exclusive to it. For example, the Native Execution Engine (NEE) significantly enhances query performance at no additional cost, starter pools allow a Spark session to be created within seconds, and unified security in the lakehouse enables Row-Level Security (RLS) and Column-Level Security (CLS) to be defined for objects in the lakehouse. The newly announced Materialized Views and many other features are also available.
Upgrade to Azure Synapse runtimes for Apache Spark 3.4 & previous runtimes deprecation
It is important to stay ahead of the curve and keep services up to date. That's why we encourage all Azure Synapse customers with Apache Spark workloads to migrate to the newest GA version, Azure Synapse Runtime for Apache Spark 3.4. The update brings Apache Spark to version 3.4 and Delta Lake to version 2.4, introduces Mariner as the new operating system, and updates Java from version 8 to 11.
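Before and after the upgrade it can be useful to confirm which versions a pool is actually running. The following is a minimal Scala sketch, assuming a notebook attached to the pool; the Delta Lake lookup reads the jar's manifest metadata and is best-effort only.

```scala
// Print the Apache Spark version the attached pool is running.
println(s"Spark version: ${spark.version}")

// Print the Java runtime version (it should report 11.x after the upgrade).
println(s"Java version:  ${System.getProperty("java.version")}")

// Best-effort Delta Lake version check: read the implementation version from the
// Delta jar's manifest; this may be absent depending on how the jar was packaged.
val deltaVersion = Option(classOf[io.delta.tables.DeltaTable].getPackage)
  .flatMap(p => Option(p.getImplementationVersion))
println(s"Delta version: ${deltaVersion.getOrElse("unknown")}")
```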
Save money and increase performance with intelligent cache for Apache Spark in Azure Synapse
Data professionals can now save money and increase the overall performance of repeat queries in their Apache Spark in Azure Synapse workloads using the new intelligent cache, now in public preview. This feature lowers the total cost of ownership by improving performance by up to 65% on subsequent reads of Parquet files stored in the available cache, and by up to 50% for CSV files.
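Because the cache works transparently at the file-read layer, no code changes are needed to benefit from it. A minimal Scala sketch along these lines makes the effect visible, assuming a Parquet dataset at a placeholder ADLS Gen2 path; the configuration key mentioned for disabling the cache is the one documented for the preview and should be verified against the current Synapse documentation.

```scala
// The cache is transparent, so the simplest way to observe it is to time a first
// (cold) and a second (warm) read of the same files.
def timeIt[T](label: String)(block: => T): T = {
  val start = System.nanoTime()
  val result = block
  println(f"$label: ${(System.nanoTime() - start) / 1e9}%.1f s")
  result
}

// Hypothetical ADLS Gen2 path; replace it with one of your own Parquet datasets.
val path = "abfss://data@mystorageaccount.dfs.core.windows.net/sales/parquet/"

timeIt("cold read")(spark.read.parquet(path).count())  // first read populates the cache
timeIt("warm read")(spark.read.parquet(path).count())  // repeat read can be served from the cache

// The cache can also be toggled per session; verify the key against the current docs:
// spark.conf.set("spark.synapse.vegas.useCache", "false")
```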
Writing data using Azure Synapse Dedicated SQL Pool Connector for Apache Spark
When using the Azure Synapse Dedicated SQL Pool Connector for Apache Spark, users can efficiently read and write large volumes of data between Apache Spark and a dedicated SQL pool in Synapse Analytics. The connector supports the Scala and Python languages in Synapse notebooks for these operations.
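As a rough illustration, a write through the connector from a Scala notebook looks like the sketch below. The import paths, the synapsesql method, and Constants.INTERNAL follow the connector documentation for the Spark 3.x runtimes (the method was named sqlanalytics on the older Spark 2.4 runtime), so verify them against the connector version bundled with your pool; the three-part table name is a placeholder.

```scala
// Imports per the connector documentation for the Spark 3.x Synapse runtimes;
// verify them against the connector version bundled with your pool.
import org.apache.spark.sql.SqlAnalyticsConnector._
import com.microsoft.spark.sqlanalytics.utils.Constants
import spark.implicits._

// Any DataFrame works; a small illustrative one is used here.
val df = Seq((1, "alpha"), (2, "beta")).toDF("id", "name")

// Write the DataFrame into an internal (managed) table of the dedicated SQL pool.
// The three-part name <database>.<schema>.<table> is a placeholder for your own table.
df.write.synapsesql("my_sql_pool.dbo.sales_summary", Constants.INTERNAL)
```

Per the same documentation, reads follow the matching pattern via spark.read.synapsesql on a three-part table name.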
Notebook - This request is not authorized to perform this operation, 403
This is a quick post about this failure and how to fix it: Error: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: The operation failed: 'This request is not authorized to perform this operation.', 403
Spark Notebook error: java.sql.SQLException: User does not have permissions to perform this action
While running a Spark notebook I hit the error: java.sql.SQLException: com.microsoft.sqlserver.jdbc.SQLServerException: User does not have permissions to perform this action
Query serverless SQL pool from an Apache Spark Scala notebook
Apache Spark notebooks in an Azure Synapse Analytics workspace can execute T-SQL queries on a serverless Synapse SQL pool. This way you can load data from a SQL table or view into your Apache Spark data frames and apply advanced data processing. In this article you will learn how to call SQL code from a Spark notebook.
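One common way to do this, sketched below in Scala, is to go through Spark's built-in JDBC source with the SQL Server driver against the serverless (on-demand) endpoint. The endpoint, database, view, and credentials are placeholders, and the article itself may use a different authentication method.

```scala
// Read the result of a T-SQL query, executed by the serverless SQL pool, into a
// Spark DataFrame using Spark's built-in JDBC source and the SQL Server JDBC driver
// that ships with the Synapse runtime. The serverless endpoint usually has the form
// <workspace>-ondemand.sql.azuresynapse.net.
val serverlessEndpoint = "myworkspace-ondemand.sql.azuresynapse.net"

val df = spark.read
  .format("jdbc")
  .option("url", s"jdbc:sqlserver://$serverlessEndpoint:1433;database=mydatabase;encrypt=true")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .option("query", "SELECT TOP 100 * FROM dbo.PopulationView") // any T-SQL the serverless pool can run
  .option("user", "sqladminuser")        // SQL authentication shown for brevity;
  .option("password", "<placeholder>")   // Azure AD (token-based) authentication also works
  .load()

df.show(10)
```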