
Read from MS SQL BLOB column

In SQL Server 2005 the most common data type for storing BLOBs was the IMAGE datatype. With SQL Server 2008 and later SQL Server 2008 R2, the image datatype coexists with several newer BLOB datatypes recommended by Microsoft, such as varbinary(MAX). Let's have a look:


  • varbinary(max) / binary(n) variables store binary data; binary(n) is fixed-length (exactly n bytes), while varbinary(max) is variable-length and may store a maximum of 2 gigabytes.
  • image variables store up to 2 gigabytes of data and are commonly used to store any type of data file (not just images).


But when you're designing a new database structure, be careful and choose the newer types, according to this Microsoft note:
"ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead.
Fixed and variable-length data types for storing large non-Unicode and Unicode character and binary data. Unicode data uses the UNICODE UCS-2 character set."
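As an illustration of that guidance, a table declared the recommended way uses varbinary(max) instead of image. The sketch below is hypothetical: the table and column names (blob_table, blob_code, blob_File) are assumptions chosen to match the query used later in this post, and blob_id is an invented key.

```sql
-- Hypothetical table for storing BLOBs the recommended way:
-- varbinary(max) instead of the deprecated image type.
CREATE TABLE blob_table
(
    blob_id   INT IDENTITY(1, 1) PRIMARY KEY,
    blob_code VARCHAR(50) NOT NULL,      -- lookup key used by the application
    blob_File VARBINARY(MAX) NOT NULL    -- the binary content itself
);
```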
Now it's time for coding. Assume that you are working on a business application that needs to retrieve a BLOB from the database using plain ADO.NET. The code below is a simple C# function which takes two parameters and returns a new stream containing the BLOB data.


 private Stream ReadBlob(string blobCode, SqlConnection conn)
        {
            //SELECT from BLOB table
            string lSQLFileContent = "SELECT TOP 1 blob_File FROM blob_table WHERE blob_code = @code";
            //Stream for the BLOB object
            Stream lFileStream = Stream.Null;
            using (SqlCommand blobCmd = new SqlCommand(lSQLFileContent))
            {
                //Setting up the command
                blobCmd.Connection = conn;
                blobCmd.Parameters.AddWithValue("@code", blobCode);

                //Test and open the connection
                if (conn.State == ConnectionState.Closed)
                    conn.Open();

                //Essential execution: SequentialAccess streams the column instead of loading the whole row into memory
                using (SqlDataReader reader = blobCmd.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    if (reader.Read())
                    {
                        //Passing a null buffer returns the total length of the field
                        long bytesize = reader.GetBytes(0, 0, null, 0, 0);
                        byte[] imageData = new byte[bytesize];
                        long bytesread = 0;
                        int curpos = 0;
                        while (bytesread < bytesize)
                        {
                            //Read in 255-byte chunks until the end of the field;
                            //advance by the number of bytes actually read
                            long chunk = reader.GetBytes(0, curpos, imageData, curpos, 255);
                            bytesread += chunk;
                            curpos += (int)chunk;
                        }
                        //Creating a stream from the byte array
                        lFileStream = new MemoryStream(imageData);
                    }
                }
            }
            return lFileStream;
        }
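For completeness, here is a hedged usage sketch of the function above (referred to as ReadBlob here). The connection string, the blob code value, and the output path are all assumptions for illustration only.

```csharp
// Hypothetical usage of the function above; the connection string,
// blob code and target file name are assumptions.
using (SqlConnection conn = new SqlConnection(@"Server=.\SQLEXPRESS;Database=BlobDb;Integrated Security=true"))
using (Stream blob = ReadBlob("INVOICE_2012_001", conn))
using (FileStream file = File.Create(@"C:\temp\invoice.pdf"))
{
    blob.CopyTo(file); // write the BLOB content to disk
}
```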


Thank You.


Required using directives:

using System.Data.SqlClient;
using System.IO;
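As a side note: on .NET 4.5 and newer, SqlDataReader also exposes GetStream, which avoids buffering the whole BLOB in one byte array. The sketch below is an alternative to the approach above, not the original method; it assumes the same hypothetical blob_table schema.

```csharp
// Alternative sketch (requires .NET 4.5+): stream the BLOB directly
// into a target stream without allocating one large byte array.
private void CopyBlobTo(string blobCode, SqlConnection conn, Stream target)
{
    string sql = "SELECT TOP 1 blob_File FROM blob_table WHERE blob_code = @code";
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@code", blobCode);
        if (conn.State == ConnectionState.Closed)
            conn.Open();

        // SequentialAccess is required for true streaming behaviour
        using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            if (reader.Read())
            {
                using (Stream source = reader.GetStream(0))
                {
                    source.CopyTo(target); // copies in internal chunks
                }
            }
        }
    }
}
```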


