Super CSV - CellProcessor for Map Entry


I have a simple POJO that has a Map inside it:

    public class Product {
        public Map<String, String> map;
    }

The CSV looks like this:

    "mapEntry1","mapEntry2","mapEntry3"

So I created a custom cell processor to parse those entries:

    public class MapEntryCellProcessor extends CellProcessorAdaptor {

        public Object execute(Object val, CsvContext context) {
            return next.execute(new AbstractMap.SimpleEntry<>("someKey", val), context);
        }
    }

and added an Entry setter method to Product:

    public void setName(Entry<String, String> entry) {
        if (getName() == null) {
            name = new HashMap<>();
        }
        name.put(entry.getKey(), entry.getValue());
    }

Unfortunately that means I have 2 setter methods: one that accepts a Map and one that accepts an Entry, which doesn't work for me (I have no control over how the POJOs are generated). Is there another way I can parse such a CSV and keep a setter that accepts a Map in Product?

It's possible to write a cell processor that collects the value of each column into a map. For example, the following processor lets you specify the key and the map to add to.

    package org.supercsv.example;

    import java.util.Map;

    import org.supercsv.cellprocessor.CellProcessorAdaptor;
    import org.supercsv.cellprocessor.ift.CellProcessor;
    import org.supercsv.util.CsvContext;

    public class MapCollector extends CellProcessorAdaptor {

        private String key;

        private Map<String, String> map;

        public MapCollector(String key, Map<String, String> map) {
            this.key = key;
            this.map = map;
        }

        public MapCollector(String key, Map<String, String> map,
                CellProcessor next) {
            super(next);
            this.key = key;
            this.map = map;
        }

        public Object execute(Object value, CsvContext context) {
            validateInputNotNull(value, context);
            map.put(key, String.valueOf(value));
            return next.execute(map, context);
        }
    }

Then, assuming your Product bean has a field named name of type Map<String, String>, you can use the processor as follows.
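For reference, a minimal sketch of the Product bean this assumes: a single name field of type Map<String, String> with a plain Map setter, which is exactly what the question asked for (no extra Entry setter needed).

    import java.util.Map;

    public class Product {

        private Map<String, String> name;

        public Map<String, String> getName() {
            return name;
        }

        // a single setter accepting a Map; the MapCollector processor
        // hands the whole populated map to this setter
        public void setName(Map<String, String> name) {
            this.name = name;
        }
    }
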

    package org.supercsv.example;

    import java.io.IOException;
    import java.io.StringReader;
    import java.util.HashMap;
    import java.util.Map;

    import junit.framework.TestCase;

    import org.supercsv.cellprocessor.ift.CellProcessor;
    import org.supercsv.io.CsvBeanReader;
    import org.supercsv.io.ICsvBeanReader;
    import org.supercsv.prefs.CsvPreference;

    public class MapCollectorTest extends TestCase {

        private static final String CSV = "John,L,Smith\n" +
            "Sally,P,Jones";

        public void testMapCollector() throws IOException {
            ICsvBeanReader reader = new CsvBeanReader(
                new StringReader(CSV),
                CsvPreference.STANDARD_PREFERENCE);

            // only need to map the field once, so use nulls
            String[] nameMapping = new String[]{"name", null, null};

            // create the processors for each row (otherwise every bean
            // will contain the same map!)
            Product product;
            while ((product = reader.read(Product.class,
                    nameMapping, createProcessors())) != null) {
                System.out.println(product.getName());
            }
        }

        private static CellProcessor[] createProcessors() {
            Map<String, String> nameMap = new HashMap<String, String>();
            final CellProcessor[] processors = new CellProcessor[]{
                new MapCollector("name1", nameMap),
                new MapCollector("name2", nameMap),
                new MapCollector("name3", nameMap)};
            return processors;
        }
    }

This outputs:

    {name3=Smith, name2=L, name1=John}
    {name3=Jones, name2=P, name1=Sally}

You'll notice that while the processors execute on all 3 columns, the result is only mapped to the bean once (hence the nulls in the nameMapping array).

I've created the processors fresh each time a row is read; otherwise every bean would share the same map... which isn't what you want ;)
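To see why a fresh map per row matters, here is a small standalone sketch (not SuperCSV code; the class and method names are hypothetical) contrasting a single shared map with a new map per row. With a shared map, every "bean" holds a reference to the same object, so earlier rows are silently overwritten by later ones.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SharedMapHazard {

        // simulates reusing one map across all rows: every bean ends up
        // referencing the same map, whose contents reflect the last row
        static List<Map<String, String>> readWithSharedMap(String[][] rows) {
            List<Map<String, String>> beans = new ArrayList<>();
            Map<String, String> shared = new HashMap<>();
            for (String[] row : rows) {
                shared.put("name1", row[0]);
                beans.add(shared); // same map instance added each time
            }
            return beans;
        }

        // simulates creating the processors (and their map) per row:
        // each bean gets its own map holding that row's values
        static List<Map<String, String>> readWithFreshMap(String[][] rows) {
            List<Map<String, String>> beans = new ArrayList<>();
            for (String[] row : rows) {
                Map<String, String> fresh = new HashMap<>();
                fresh.put("name1", row[0]);
                beans.add(fresh);
            }
            return beans;
        }
    }
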

