Wednesday, 2 December 2009

Silverlight 3 RIA Services Without Entities (POCO) – Part 1

(I’m including Part 1 in the title because there’s a possibility I’ll be adding to this topic.)

If you look for information about using Plain Old CLR Objects with RIA Services (or using RIA Services without Linq to Entities) you’ll see a few articles telling you how easy it is to use your own business objects from a DomainService: just stick the [Key] attribute above a property that uniquely identifies your object and all will be well.
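
As a rough sketch of the pattern those articles describe (the Widget class and WidgetDomainService are invented names for illustration, and the exact RIA Services namespaces vary between the preview releases):

    // [Key] comes from System.ComponentModel.DataAnnotations; DomainService and
    // [EnableClientAccess] come from the RIA Services assemblies.
    public class Widget
    {
        [Key]                              // identifies the object to RIA Services
        public int WidgetId { get; set; }  // everything shared must be a public get/set property

        public string Name { get; set; }
    }

    [EnableClientAccess]
    public class WidgetDomainService : DomainService
    {
        public IEnumerable<Widget> GetWidgets()
        {
            return new[] { new Widget { WidgetId = 1, Name = "Example" } };
        }
    }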

Except it’s not necessarily true. It all depends on how ‘smart’ your objects are, and on how you expect RIA Services to behave. If you have the (accurate) mental model of objects being serialized and deserialized between client and server, then you’re probably going to be OK.

On the other hand, if you’re working with smarter objects that do more than just hold database fields, and you think of RIA Services as something more like remoting, then, like me, you’re going to be disappointed.

To be honest, my first clue should have been that everything has to be get/set and the classes need parameterless constructors. My ‘business objects’ had some internal state that they used to map themselves back to the database, so I marked that with [Exclude] attributes.

I tried sending these objects to the client via a query method, and then accepting them back on the server as parameters to an [Invoke] method. All the internal state was gone. That was because they weren’t the same objects I had returned from the query; they were newly constructed objects with the same values in their public properties.

The lesson was that there’s no reference tracking going on between client and server. If you send the client an object, it will be cloned on the client side. If that object is modified and passed back, the server gets a new object with all the values – not the original object with new values applied.
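
To make that concrete, here’s roughly the shape of my setup, reusing the invented Widget/WidgetDomainService names from the sketch above and the attributes already mentioned:

    public class Widget
    {
        [Key]
        public int WidgetId { get; set; }

        public string Name { get; set; }

        [Exclude]  // internal state used to map back to the database; never sent to the client
        public string DatabaseRowId { get; set; }
    }

    [EnableClientAccess]
    public class WidgetDomainService : DomainService
    {
        public IEnumerable<Widget> GetWidgets()
        {
            // The server-side instances have their internal state populated here...
            return new[] { new Widget { WidgetId = 1, Name = "Example", DatabaseRowId = "row 1" } };
        }

        [Invoke]
        public void SaveWidget(Widget widget)
        {
            // ...but the widget arriving here is not the instance GetWidgets returned.
            // It is a newly constructed object holding the deserialized public property
            // values, so DatabaseRowId is back to its default (null).
        }
    }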

This probably seems obvious to many people, particularly those who have used similar frameworks (like the view model stuff in ASP.NET MVC) but it’s not something I saw explicitly called out in any of the explanations of using Silverlight RIA services with POCO.

Thursday, 15 October 2009

Connecting Android Phone to 2 Google Accounts (update)

Original article here

Recently, Google announced that they now support email over ActiveSync. So in theory you could set the phone to get email, calendar and contacts via ActiveSync and not use IMAP for email.

I tried setting this up, but found the email part of ActiveSync to be unreliable (at least with my “Google apps for your domain” account) – it complained about keys and lost the email from the phone’s inbox.

To be fair, they’re targeting the ActiveSync support at the iPhone and Windows Mobile, not Android.

Perhaps reliability will improve once the system has been running a bit longer; it would be nice not to have to use both IMAP and ActiveSync to access the second Google account.

Wednesday, 26 August 2009

Connecting an Android Phone to 2 Google Accounts

If you’ve got an Android phone, and you want to get your contacts, email and calendar from two Google accounts (e.g. work and home) it’s possible but a bit clunky.

Set up the phone with the Google account you want to use most; this will be the account that the phone’s built-in Google apps use. Note that if you change your mind about which account this should be, you’ll need to reset your phone and so lose all your settings.

For the other account, add an Exchange/ActiveSync account for the server m.google.com (as if you were using Windows Mobile, as shown here) and set it to sync calendar and contacts (not email, because Google Sync doesn’t currently support it). To pick up email, add an IMAP account as shown here.

Now your contacts and calendar entries that came from the main Google account will show up as from “Google” and the items from the second account will show up as “Exchange”.

Not perfect, but it works OK. Now I just need to figure out how to stop it from listing all the ActiveSync’d contacts in “lastname, firstname” format.

Also note that either of these Google accounts can be a Google Apps For Your Domain (gafyd) account, provided IMAP and sync are enabled by your domain administrator.

Wednesday, 3 June 2009

Building a Network Cable Without a Crimp Tool

Searching the net to find out if it’s possible to make a cable without an RJ45 crimp tool yielded lots of people saying it’s not practical.

If you’ve only got a couple of ends to fit and don’t mind wasting a connector (or two) it’s not that bad.

Just splay the inner cores of the cable in the correct order and cut straight across the ends. Then feed the cores into the plug (making sure they go down to the ends of the correct holes). While keeping the wires pushed firmly into the plug, push down each metal piece in turn with a small flat-blade screwdriver.

Finally, give the cable a half-hearted tug to make sure they’re all fairly firmly stuck.

Obviously a crimp tool would give a much more reliable result, but at a push…

Tuesday, 19 May 2009

Getting the Original Location of an Assembly’s Shadow Copy

If a test framework has loaded a shadow copy of your test assembly, and you need the path it was built to, you can use the following:

            var codebase = new Uri(Assembly.GetExecutingAssembly().CodeBase);
            var assembly = new FileInfo(codebase.LocalPath);
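
For comparison, a quick illustrative snippet: Assembly.Location points at the shadow copy, whereas the CodeBase trick above recovers the original build output path.

            var asm = Assembly.GetExecutingAssembly();

            // Location is where the CLR actually loaded the file from - under shadow
            // copying that is the cached copy, not your bin folder.
            Console.WriteLine(asm.Location);

            // CodeBase is a URI for the location the assembly was originally loaded
            // from, i.e. the path it was built to.
            Console.WriteLine(new Uri(asm.CodeBase).LocalPath);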

Wednesday, 29 April 2009

Assembly.Load of a C++/CLI Assembly Gives “Unverifiable code failed policy check”

This is one of those problems where I couldn’t find the answer with Google.

Trying to load a C++ mixed-mode assembly (like a CLR class library) from a byte array instead of from a file throws a FileLoadException.

            // ClassLib.dll is a mixed-mode assembly (from a C++ CLR Class Library project)

            Assembly.Load("ClassLib"); // Works

            using (var stream = new FileStream("ClassLib.dll", FileMode.Open, FileAccess.Read))
            {
                var bytes = new byte[stream.Length];
                stream.Read(bytes, 0, bytes.Length);

                Assembly.Load(bytes); // System.IO.FileLoadException "Unverifiable code failed policy check. (Exception from HRESULT: 0x80131402)"
            }
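
As far as I can tell, the failure happens because a mixed-mode image contains native code, and Assembly.Load(byte[]) only copes with pure managed assemblies. One workaround (a sketch, assuming it’s acceptable to write a temporary file) is to put the bytes back on disk and load from the path instead:

            // Write the bytes from the snippet above to a temporary file...
            var tempPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".dll");
            File.WriteAllBytes(tempPath, bytes);

            // ...and load from the file path, which handles the native parts of the image.
            var assembly = Assembly.LoadFrom(tempPath);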

Tuesday, 28 April 2009

Mock and Stub for Linq To Sql

All the information I could find on testing Linq to Sql (DLinq) took one of the following approaches:

  • Using reflection to hack the internals of System.Data.Linq
  • Duplicating the database (e.g. to Sql Compact Edition)
  • Using the real database with rollback attributes on the test
  • Writing the linq code against a ‘repository’ rather than the designer-generated classes.

Here’s a way to mock the tables so that the linq to sql code can be tested against in-memory tables.

For the example code below, I’m using a dbml-generated DataContext which contains a reference to the Regions table of the Northwind database.

The wizard-generated code looks like this:

    [System.Data.Linq.Mapping.DatabaseAttribute(Name="Northwind")]
    public partial class MyDataDataContext : System.Data.Linq.DataContext
    {
        …
        public System.Data.Linq.Table<Region> Regions
        {
            get
            {
                return this.GetTable<Region>();
            }
        }
    }

And the method I want to test looks like this:

        public void DataAccess()
        {
            var context = new MyDataDataContext();

            var max = context.Regions.Max(r => r.RegionID);
            context.Regions.InsertOnSubmit(new Region { RegionID = max + 1, RegionDescription = "New region" });

            context.SubmitChanges();
        }

So first, create an interface to replace the use of MyDataDataContext. Here is an IMockableDataContext which describes the methods you might need from System.Data.Linq.DataContext:

    public interface IMockableDataContext : IDisposable
    {
        void SubmitChanges();
    }

Next, add an IMyDataContext which describes the table properties. These are copied from the designer-generated MyDataDataContext, but the return types are changed to use IMockableTable instead of System.Data.Linq.Table:

    public interface IMyDataContext : IMockableDataContext
    {
        IMockableTable<Region> Regions { get; }
    }

IMockableTable is based on ITable and IQueryable<TEntity>, the same as System.Data.Linq.Table – so any code using the Regions property should work fine if the types are defined using ‘var’:

    public interface IMockableTable<TEntity> : ITable, IQueryable<TEntity>
    {
    }

IMockableTable is implemented by a wrapper class MockableTable. It can be constructed from either an object which implements ITable and IQueryable<TEntity> (like an instance of System.Data.Linq.Table) or from two separate objects (like a mocked ITable and an in-memory collection). All its members call into the object(s) supplied in the constructor.

    public class MockableTable<TEntity> : IMockableTable<TEntity>
    {
        private readonly ITable table;
        private readonly IQueryable<TEntity> queryable;

        public MockableTable(ITable table, IQueryable<TEntity> queryable)
        {
            this.table = table;
            this.queryable = queryable;
        }

        public MockableTable(ITable table)
            : this(table, (IQueryable<TEntity>)table)
        {
        }

        public IEnumerator<TEntity> GetEnumerator()
        {
            return queryable.GetEnumerator();
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return ((IEnumerable)queryable).GetEnumerator();
        }

        public Expression Expression
        {
            get { return queryable.Expression; }
        }

        public Type ElementType
        {
            get { return queryable.ElementType; }
        }

        public IQueryProvider Provider
        {
            get { return queryable.Provider; }
        }

        public void InsertOnSubmit(object entity)
        {
            table.InsertOnSubmit(entity);
        }

        public void InsertAllOnSubmit(IEnumerable entities)
        {
            table.InsertAllOnSubmit(entities);
        }

        public void Attach(object entity)
        {
            table.Attach(entity);
        }

        public void Attach(object entity, bool asModified)
        {
            table.Attach(entity, asModified);
        }

        public void Attach(object entity, object original)
        {
            table.Attach(entity, original);
        }

        public void AttachAll(IEnumerable entities)
        {
            table.AttachAll(entities);
        }

        public void AttachAll(IEnumerable entities, bool asModified)
        {
            table.AttachAll(entities, asModified);
        }

        public void DeleteOnSubmit(object entity)
        {
            table.DeleteOnSubmit(entity);
        }

        public void DeleteAllOnSubmit(IEnumerable entities)
        {
            table.DeleteAllOnSubmit(entities);
        }

        public object GetOriginalEntityState(object entity)
        {
            return table.GetOriginalEntityState(entity);
        }

        public ModifiedMemberInfo[] GetModifiedMembers(object entity)
        {
            return table.GetModifiedMembers(entity);
        }

        public DataContext Context
        {
            get { return table.Context; }
        }

        public bool IsReadOnly
        {
            get { return table.IsReadOnly; }
        }
    }

So that MyDataDataContext can be treated as an IMyDataContext, we can take advantage of its partial declaration: add an extra partial class that explicitly implements IMyDataContext, creating a MockableTable wrapper around the Regions table.

    public partial class MyDataDataContext : IMyDataContext
    {
        IMockableTable<Region> IMyDataContext.Regions
        {
            get
            {
                return new MockableTable<Region>(Regions);
            }
        }
    }

The last stage in fitting the interfaces to the code is to modify the method to be tested:

        public void DataAccess()
        {
            var context = new MyDataDataContext();
            AddNewRegion(context);
        }

        public void AddNewRegion(IMyDataContext context)
        {
            var max = context.Regions.Max(r => r.RegionID);
            context.Regions.InsertOnSubmit(new Region { RegionID = max + 1, RegionDescription = "New region" });

            context.SubmitChanges();
        }

And now a test can be written (here I’m using NUnit and Moq):

        [Test]
        public void TestAddNewRegion()
        {
            var mockRegionTable = new Mock<ITable>();
            var mockRegionData = new[] { new Region { RegionID = 5, RegionDescription = "Here" }, new Region { RegionID = 9, RegionDescription = "There" } };
            var mockRegions = new MockableTable<Region>(mockRegionTable.Object, mockRegionData.AsQueryable());
            var mockContext = new Mock<IMyDataContext>();
            mockContext.SetupGet(c => c.Regions).Returns(mockRegions);

            var data = new DataClass();
            data.AddNewRegion(mockContext.Object);

            mockRegionTable.Verify(x => x.InsertOnSubmit(It.Is<Region>(r => r.RegionID == 10 && r.RegionDescription == "New region")));
            mockContext.Verify(c => c.SubmitChanges());
        }

It’s a fair bit of code, but most of it is a one-off:

  • IMockableDataContext and (I)MockableTable are reusable.
  • Adding tables to MyDataDataContext means updating IMyDataContext and the partial declaration of MyDataDataContext (see the sketch below).
  • Future mockable data contexts need a new IFooDataContext and a partial declaration of FooDataContext.
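
For example (the Territory class and Territories property are assumed here, as if the dbml also included Northwind’s Territories table), the two hand-written pieces would grow to something like this:

    public interface IMyDataContext : IMockableDataContext
    {
        IMockableTable<Region> Regions { get; }
        IMockableTable<Territory> Territories { get; }
    }

    public partial class MyDataDataContext : IMyDataContext
    {
        IMockableTable<Region> IMyDataContext.Regions
        {
            get { return new MockableTable<Region>(Regions); }
        }

        IMockableTable<Territory> IMyDataContext.Territories
        {
            get { return new MockableTable<Territory>(Territories); }
        }
    }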

Any comments, criticisms or suggestions for improvements are welcome!